
NetEye Cluster on Microsoft Azure

How to install a NetEye cluster on Microsoft Azure

Create and manage resources on Azure using Terraform

The files required by this guide can be found in the repository at https://github.com/WuerthPhoenix/neteye-azure-installation.

Important

To provision the infrastructure, you must have both the terraform and az (Azure) CLI tools installed on your PC.

Warning

Terraform will create a terraform.tfstate file, which contains the configuration of the resources on Azure and some credentials. It must be considered a SECRET and must not be lost.

  • The terraform files are kept in the directory /src/terraform.

  • Follow this configuration guide to set up the Terraform variables; afterwards you can follow the first part of the README.md file to deploy the resources on Azure.

Terraform variables configuration

  1. Log in to Azure with az login (follow the login procedure described in the Azure Terraform Provider documentation).

  2. Gather the Azure subscription ID with az account list (see the example below).

  3. Create a *.tfvars file with the following content (adjust the variable values as you see fit):

azure_subscription_id = "<The Azure subscription ID from the previous step>"

resource_group_name  = "neteye_group"
resource_name_prefix = "neteye_terraform"
cluster_size         = 2
vm_size              = "Standard_E4as_v5"
disk_size            = 256
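For steps 1 and 2 above, a minimal az CLI sequence might look like the following (the last command is optional and simply prints the ID of the currently active subscription):

az login
az account list --output table
az account show --query id --output tsv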

The variables in the *.tfvars file are:

  • azure_subscription_id: the Azure subscription ID

  • resource_group_name: the name of the resource group in which the resources will be created.

  • resource_name_prefix: the prefix for the names of all the resources that will be created, including the VMs.

  • vm_hostname_template: the template used to generate the external hostnames of the VMs. It must contain the string %02d, which is replaced by the number of the VM (e.g. with neteye%02d.test.it, VM 1 becomes neteye01.test.it); see the example after this list.

  • cluster_size: the number of virtual machines to be created.

  • vm_size: the size to be used when creating the virtual machines. Check the Azure documentation for valid values.

  • disk_size: the size of the data disk in GB.
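Note that vm_hostname_template does not appear in the sample file above; if you want to set it, a line following the %02d convention described above could look like this (the domain is only an example):

vm_hostname_template = "neteye%02d.test.it"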

Provision the resources

To start the provisioning process run the following command:

terraform apply --var-file "<file defined previously>.tfvars"
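If the working directory has not been initialized yet (the first part of the README.md covers this), terraform apply will refuse to run; in that case initialize it first with:

terraform init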

To get the ne_root password use:

terraform output --raw admin_password
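Assuming ne_root is the administrative account created on the VMs and the external hostnames follow vm_hostname_template, you can then log in to a node with something like:

# hypothetical hostname, generated from vm_hostname_template
ssh ne_root@neteye01.test.it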

Delete the resources

To delete the resources (useful, for example, to clean up after creating a test cluster), run the following command:

terraform destroy --var-file "<file defined previously>.tfvars"

Note

Avoid changing the configuration of the created resources manually; if you need to make changes, modify the code and open a PR.

To correctly delete the created resources, run the destroy command from the same location where the apply command was run (it needs the same state saved in terraform.tfstate).

Configure the VMs to create a NetEye cluster

Warning

There is only one NIC per VM (thus only one subnet). For this reason you must set the NIC as Trusted:

firewall-cmd --set-default-zone trusted

You can verify this by checking that eth0 appears in the interfaces field in the output of the following command:

firewall-cmd --zone=trusted --list-all
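The output should look roughly like the following (illustrative, trimmed to the relevant fields):

trusted (active)
  target: ACCEPT
  interfaces: eth0
  sources:
  services: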

The /etc/hosts file is already populated with both internal and external IPs.
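As an illustration only (all hostnames and addresses below are hypothetical), the entries look something like:

# internal cluster IPs
10.1.0.4     neteye01.neteyelocal
10.1.0.5     neteye02.neteyelocal
# external IPs
20.50.10.11  neteye01.test.it
20.50.10.12  neteye02.test.it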

1. Transform RHEL to NetEye

Make sure the public IPs of the VMs are enabled on repo.wuerth-phoenix.com.

Note

Register the system with the subscription manager (for this step a dev license should be sufficient).

If you are installing a NetEye version earlier than 4.43, also install network-scripts (dnf install network-scripts).
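As a sketch, assuming you register interactively with a developer subscription, the commands for this note are:

subscription-manager register
dnf install network-scripts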

Warning

Set SELinux to permissive mode:

sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce permissive
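You can confirm the change with getenforce, which should now report Permissive:

getenforce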

Run the src/scripts/rhel-to-neteye.sh script on all nodes, passing the NetEye version. For example:

rhel-to-neteye.sh 4.43

Warning

Restart the shell to populate all the new environment variables: exec bash

2. Follow NetEye Guide until Fencing

Warning

Note that the nodes are numbered starting from index 00, not 01 (e.g. neteye00.example.it).

At this point each VM should be roughly in the state of a machine bootstrapped from the NetEye ISO. You can follow the guide at Cluster Nodes - NetEye User Guide.

Caution

Terraform tends to overwrite manual changes to resources when you re-run it. Be aware of this behaviour and make sure any manual steps are documented and reapplied as needed.


Warning

When you reach the Cluster Fencing Configuration part, run dnf install fence-agents-azure-arm and follow the steps explained in this Red Hat guide to set up fencing.

Afterwards continue with the steps below.

3. Set the nic value on cluster_ip

pcs resource update cluster_ip nic=eth0
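To double-check that the parameter was applied, you can inspect the resource (on recent pcs versions):

pcs resource config cluster_ip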

4. Edit and set up cluster templates

Note

For non-PCS-managed services you can follow the steps in the guide.

Set the correct volume_group, and set ip_pre to 10.1.0.

Warning

Don’t change the default ip_post value.

Run the Perl script as described in the NetEye Guide.

5. Add azure-lb pcs resources

You can run the src/ansible/azure-lb-pcs-resources.yml Ansible playbook (on one node).
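Assuming Ansible is available on the node and the playbook targets the local host, the invocation is simply:

ansible-playbook src/ansible/azure-lb-pcs-resources.yml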

Warning

If you run this playbook multiple times, the last two tasks (Add cluster ip res and Add colocation) will fail on subsequent runs because the resources already exist. This is expected behavior.

6. Proceed with regular configuration

You can continue following the NetEye Guide as usual from Cluster Nodes - NetEye User Guide onwards.