User Guide

System Installation

This section provides guidelines for installing and setting up NetEye in different environments: as a single node or as a cluster, with satellites if necessary.

NetEye 4 is available as an ISO image for physical installations as part of our continuous release strategy. Please check section Acquiring NetEye ISO Image for download instructions.

Supported Virtualization Environments

NetEye ISO installation is supported in the following virtualization environments:

  • VMware

  • KVM

  • Hyper-V

VMware

To create a virtual machine and install NetEye on it, start VMware Workstation, click on File > New Virtual Machine, and follow these steps:

  1. Select “Custom (advanced)”, then click “Next”.

  2. Leave the defaults as they are, and click “Next”.

  3. Select “ISO image” and then the NetEye ISO you want to install. You might see the warning “Could not detect which operating system is in this image. You will need to specify which operating system will be installed”. Ignore it and click “Next”.

  4. Select Linux as the Guest OS, and specify “Red Hat Linux” in the dropdown menu. Click “Next”.

  5. Name the VM as you prefer, and select the location to store it.

  6. Specify the number of processors (recommended: 2) and click “Next”.

  7. Specify the amount of memory (recommended: 4GB) and click “Next”.

  8. Select the type of connection according to your needs, then click “Next”.

  9. Keep the default settings for I/O controllers and click “Next”.

  10. Select “SATA” as the virtual disk type and click “Next”.

  11. Select “Create a new virtual disk” and click “Next”.

  12. Specify the disk capacity (minimum: 40GB) and click “Next”.

  13. Rename the disk to a name you prefer.

  14. Review the configuration you just created, deselect “Automatically start the VM”, and click on “Finish”.

You should now proceed to section Powering up the VM.

KVM

To create a virtual machine and install NetEye on it, start the Virtual Machine Manager, click on File > New Virtual Machine to start the configuration, and follow these steps:

  1. Select “Local install media”, and click on “Forward”.

  2. Choose the NetEye ISO to install, uncheck “Automatically detect from the installation media/source” under “Choose the operating system you are installing”, and then select “CentOS 7.0” for the OS (you can also start typing in the text box to see the available OSs, or run osinfo-query os in your terminal to see all available variants). Click “Forward”.

  3. Specify the amount of memory (recommended: 4GB) and the number of processors (recommended: 2), then click “Forward”.

  4. Specify the disk capacity (minimum: 40GB), then click “Forward”.

  5. Give the VM the name you prefer, and review the configuration. Make sure “Customize configuration before install” is checked, then click “Finish”.

  6. In the configuration panel that appears, go to “Boot Options” and check that Disk1 and CDRom are both selected.

  7. In the next configuration panel that appears, go to “VirtIO Disk 1”, expand the Advanced options, and change the disk bus to SATA.

  8. Click on “Apply” to propagate your changes.

  9. Click on “Begin installation” to start the NetEye installation.

You should now proceed to section Powering up the VM.

Hyper-V

To create a virtual machine and install NetEye on it, start Hyper-V Manager, select Actions > New > Virtual Machine to start the configuration, and follow these steps:

  1. Click “Next”.

  2. Specify the name of your new VM and where to store it, then click “Next”.

  3. Leave the defaults for “Specify Generation”, click “Next”.

  4. Specify the amount of memory (recommended: 4GB), click “Next”.

  5. Select “Default switch” as the connection adapter, click “Next”.

  6. Specify the disk capacity (minimum: 40GB), click “Next”.

  7. Specify the ISO that you want to install, click “Next”.

  8. Review your settings, then click “Finish”.

  9. Before starting your new VM, check the list of startup media in its BIOS settings and make sure the CD is in the list.

  10. Click on Action > Start to start the virtual machine.

You should now proceed to section Powering up the VM.

Powering up the VM

At this point, your VM should be successfully created, and you can power it up. After a few seconds, the NetEye logo will appear, and a countdown to automatically initiate the installation will start.

After ten seconds, if no key is pressed, the installation process starts. The installation process will take several minutes to complete, after which the VM will reboot from the internal hard disk.

At the end of the boot process, you will be prompted to enter your credentials (root/admin). If the login is successful, you can now start to configure your NetEye VM.

Acquiring NetEye ISO Image

All the NetEye 4 ISO images can be found on the NetEye download site. To be sure you have downloaded a valid image, follow the verification procedure below.

Import the public GPG key

First, download the GPG public key as a zipped archive from NetEye -> GPG public key -> public-gpg-key. Extract the archive and then import the key with the following command:

$ gpg --import public.gpg

Now verify the imported key:

$ gpg --fingerprint net.support@wuerth-phoenix.com

If the fingerprint matches the one from the NetEye blog, you have the right key installed on your system.

Download and verify

From the link above download:

  • The desired ISO file

  • The sha256sum.txt.asc file

Once you have the sha256sum.txt.asc file, verify its signature as follows:

$ gpg --verify sha256sum.txt.asc

The output will look something like this:

gpg: Signature made Tue 29 Sep 2020 03:50:01 PM CEST
gpg:               using RSA key B6777D151A0C0C60
gpg: Good signature from "Wuerth Phoenix <net.support@wuerth-phoenix.com>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: E610 174A 971E 5643 BC89  A4C2 B677 7D15 1A0C 0C60

Once you have verified the signature of the sha256sum.txt.asc file, please make sure you have the ISO and the sha256sum.txt.asc file in the same directory. You can then verify the ISO file with the following command:

$ sha256sum -c sha256sum.txt.asc 2>&1 | grep OK

The output will look something like this:

neteye4.15-centos7.stable.iso: OK

At this point the ISO file is verified and ready to be used. If the output differs from “OK”, the ISO image may be corrupted and should be downloaded again.
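If you want to see how the checksum verification behaves before running it against the real ISO, the mechanics can be tried on a throwaway file (the file names below are illustrative stand-ins, not the real ISO or checksum file):

```shell
# Demonstration of the sha256sum -c mechanics on a throwaway file.
# "sample.iso" and "sha256sum.txt" stand in for the real ISO and
# the signed checksum file from the download site.
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo "sample data" > sample.iso
sha256sum sample.iso > sha256sum.txt
sha256sum -c sha256sum.txt | grep OK
```

A mismatching file would instead produce a line ending in FAILED, and sha256sum -c would exit with a non-zero status.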

Single Node

This section describes how to set up your NetEye virtual machine from scratch, and presents the NetEye 4 monitoring environment.

System Setup

NetEye 4 is delivered as a virtual machine. Once it is installed, you will need to access the VM via a terminal or SSH. The first time you log in, you will be required to change your password to a non-trivial one; to maintain a secure system, you should do this as soon as possible. The next steps are to configure your network, update NetEye, and complete the installation.

Step 1: Define the host name for the NetEye server:

# hostnamectl set-hostname {hostname.domain}
# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
{ip}        {hostname.domain} {hostname}

Step 2: Define the DNS configuration:

# vim /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search {domain}
nameserver {ip1}
nameserver {ip2}

Step 3: Configure the network:

# vim /etc/sysconfig/network-scripts/ifcfg-{interface}
# Generated by parse-kickstart
IPV6INIT="yes"
DHCP_HOSTNAME="{hostname}"
IPV6_AUTOCONF="yes"
BOOTPROTO="static" # To configure according to the client
DEVICE="{interface}"
ONBOOT="yes"
IPADDR={ip}  # Configure these three only if static
NETMASK={mask}
GATEWAY={gw}
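For reference, a filled-in static configuration might look like the following (the interface name and addresses are illustrative, not defaults):

```
# Example: /etc/sysconfig/network-scripts/ifcfg-ens192 (illustrative values)
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
BOOTPROTO="static"
DEVICE="ens192"
ONBOOT="yes"
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
```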

Step 4: Install the latest updates and packages for NetEye:

# yum update
# yum --enablerepo=neteye update
# yum --enablerepo=neteye groupinstall neteye

Step 5: Define an SSH key for secure communications with satellites:

# ssh-keygen -t rsa

Step 6: Set the local time zone.

Find the time zone that best matches your location, and then set it system-wide using the following commands:

# timedatectl list-timezones
# timedatectl set-timezone {Region}/{City}

Then update PHP to use that same location:

  • Create a file named /neteye/local/php/conf/php.d/30-timezone.ini

  • Insert the following text in that file: date.timezone = {Region}/{City}

  • Restart the php-fpm service: # systemctl restart php-fpm.service
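As a sketch, the override file can be generated from the shell. Here it is written to a temporary directory purely for illustration; on a real system the target path is /neteye/local/php/conf/php.d/30-timezone.ini, and Europe/Rome is just an example zone:

```shell
# Generate the PHP timezone override file (Europe/Rome is an example zone).
TZ_NAME="Europe/Rome"
conf_dir=$(mktemp -d)   # stands in for /neteye/local/php/conf/php.d
printf 'date.timezone = %s\n' "$TZ_NAME" > "$conf_dir/30-timezone.ini"
cat "$conf_dir/30-timezone.ini"
```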

Step 7: Make sure required services are running:

# systemctl start influxdb.service grafana-server.service mariadb.service

Step 8: Run the secure install script to complete NetEye setup:

# /usr/sbin/neteye_secure_install

If you would like to verify that NetEye is correctly installed, you can bring up all services and check its current status with the following commands:

# neteye start
# neteye status

Note

If your NetEye setup includes satellites, please make sure to carry out the steps in section Satellite Nodes.

Root User Password

When NetEye is first installed, the system generates a unique, random password to use when logging in to the web interface. The password is saved in a hidden file in the root directory of the machine: /root/.pwd_icingaweb2_root.

The first time you log in to the NetEye web interface, you will need to insert the following credentials:

  • User:   root

  • Password: The password you will find inside the file .pwd_icingaweb2_root.

We suggest that you change the root password to a strong one, with at least the following characteristics:

  • At least six characters long (the more characters, the stronger the password)

  • A combination of letters, numbers and symbols (@, #, $, %, etc.).

  • Both uppercase and lowercase letters

To change your password, click on the “gear” icon at the bottom left of NetEye, enter and confirm the new password, then click the “Update Account” button.

Cluster Nodes

NetEye 4’s clustering service is based on the Red Hat 7 High Availability Clustering technologies:

  • Corosync: Provides group communication between a set of nodes, application restart upon failure, and a quorum system.

  • Pacemaker: Provides cluster management, lock management, and fencing.

  • DRBD: Provides data redundancy by mirroring devices (hard drives, partitions, logical volumes, etc.) between hosts in real time.

Cluster resources are typically quartets consisting of an internal floating IP, a DRBD device, a filesystem, and a (systemd) service.

Once you have installed clustering services according to the information on this page, please turn to the Cluster Architecture page for more information on configuration and how to update.

Prerequisites

A NetEye 4 cluster must consist of between 2 and 16 identical servers running CentOS 7. They must satisfy the following requirements:

  • Networking:

    • Bonding across NICs must be configured

    • A dedicated cluster network interface, named exactly the same on each node

    • One external static IP address which will serve as the external Cluster IP

    • One IP Address for each cluster node (i.e., N addresses)

    • One virtual (internal) subnet for internal floating service IPs (this subnet MUST NOT be reachable from any machine except cluster nodes, as it poses a security risk otherwise)

    • All nodes must know the internal IPs of all other nodes, and be reachable over the internal network (defined in /etc/hosts)

  • Storage:

    • At least one volume group with enough free storage to host all service DRBD devices defined in Services.conf

  • General:

    • All nodes must have root ssh-keys generated, and must trust each other, with the keys stored in /root/.ssh/authorized_keys

    • Internet connectivity, including the ability to reach repositories at Würth Phoenix

    • All nodes must have the yum group ‘neteye’ installed

    • All nodes must have the latest CentOS 7 and NetEye 4 updates installed

Installation Procedure

Depending on the type of nodes you are installing in your cluster, follow the appropriate procedure below.

If your NetEye setup includes satellites, please make sure to carry out the steps in section Satellite Nodes after each node’s installation.

Basic Cluster Install

  • Define the nodes in ClusterSetup.conf (example configuration templates can be found in /usr/share/neteye/cluster/templates/). This guide assumes that you copy your ClusterSetup.conf to /usr/share/neteye/scripts/cluster and configure it there.

  • Run the cluster setup script to install a basic Corosync/Pacemaker cluster with a floating clusterIP enabled.

Note

If any issue prevents the correct execution of cluster_base_setup.pl, you can run the same command again with the added option --force to override. Note that this will destroy any existing cluster on the nodes.

Note

The password should be treated as a one-time password, and will not be needed after initial setup.

# cd /usr/share/neteye/scripts/cluster
# ./cluster_base_setup.pl -c ./ClusterSetup.conf -i <cluster-ip> -s <subnet_cidr> -h neteye.example.com -e <internal interface> -p <very_safe_pw>
# ./cluster_base_setup.pl -c ./ClusterSetup.conf -i 192.0.2.47 -s 24 -h neteye.example.com -e ens224 -p Secret
  • The standard Würth Phoenix fencing is IPMI Fencing for physical clusters, and vSphere fencing for virtual clusters.

Cluster Service Setup

  • Adjust all necessary IPs, ports, DRBD devices, sizes, etc. in all *.tpl.conf files (found in /usr/share/neteye/cluster/templates/). In a typical configuration you only need to update ip_pre, the prefix of the IP address (e.g. 192.168.1 for 192.168.1.0/24) used to generate the virtual IP for each resource, and cidr_netmask, the CIDR of the internal subnet used by IP resources (e.g. 24 for 192.168.1.0/24).

  • Run the cluster_service_setup.pl script on each *.tpl.conf file starting from Services-core.tpl.conf:

    # cd /usr/share/neteye/scripts/cluster
    # ./cluster_service_setup.pl -c Services-core.conf.tpl
    # ./cluster_service_setup.pl -c Services-xxx.conf.tpl
    # [...]
    

    The cluster_service_setup.pl script reports the last command executed in case there were any errors. If you fix an error manually, you will need to remove the already-configured resource template from Services.conf and rerun that command. Then execute the cluster_service_setup.pl script again as above to finalize the configuration.

NetEye Service Setup

  • Move all resources to a single node by running pcs node standby on all other nodes. This is only a first-time requirement, as many services require local write access during the initial setup procedure.

  • Run the neteye_secure_install script on the single active node

  • Run the neteye_secure_install script on every other node

  • Take all nodes out of standby by running pcs node unstandby --all

  • Set up the Director field “API user” on slave nodes (Director ‣ Icinga Infrastructure ‣ Endpoints)

Elasticsearch Only Nodes

This section applies only if you have installed the Log Manager module, which contains Elasticsearch.

An Elasticsearch only node has the same prerequisites and follows the same installation procedure as a standard NetEye cluster node. Please refer to Cluster Configuration Guidelines / Elasticsearch Only Nodes and to Elasticsearch Clusters / Elasticsearch Only Nodes for details.

Voting Only Nodes

A Voting only node has the same prerequisites and follows the same installation procedure as a standard NetEye cluster node.

To create a voting only node, create an entry of type VotingOnlyNode in the file ClusterSetup.conf, as in the following example (the usage of ClusterSetup.conf is explained in Cluster Installation / Basic Cluster Install). The syntax is similar to the one used for standard nodes, but note that at most one voting node can be part of the cluster; therefore a single JSON object is specified instead of an array.

"VotingOnlyNode" : {
       "addr" : "192.168.47.3",
       "hostname" : "neteye03.neteyelocal",
       "hostname_ext" : "rdneteye03.si.wp.lan",
       "id" : 3
   }

Please refer to Cluster Configuration Guidelines / Voting Only Nodes for details.

If you have installed the Log Manager module, which contains Elasticsearch, please refer also to Elasticsearch Clusters / Voting Only Nodes for details

Satellite Nodes

Prerequisites

A Satellite is a NetEye instance which depends on a main NetEye installation, the Master (which, in the case of a cluster, is also the Master node), and carries out tasks such as:

  • executing Icinga 2 checks and forwarding the results to the Master

  • collecting logs and forwarding them to the Master

  • forwarding data through NATS

It is required that both the Master and the Satellite be equipped with the same NetEye version. Satellites can be arranged in tenants.

Moreover, the NATS connection between Master and Satellite is always initiated by the Satellite, so please ensure that the Networking Requirements for NATS Leaf Nodes are satisfied.

Warning

Before proceeding with the Satellite configuration procedure, if you are in a NetEye cluster environment, remember to put all nodes into standby except the one on which the configuration steps are executed (the Master).

The remainder of this section will lead you through the steps needed to configure a new NetEye Satellite.

For further information about NetEye Satellites, refer to Section Satellite.

Configuration of a Satellite

In order to create a new Satellite (we’ll call the Satellite acmesatellite), we first need to create the tenant folder on the Master (we’ll call the tenant tenant_A), /etc/neteye-satellites.d/tenant_A, and then create the configuration file /etc/neteye-satellites.d/tenant_A/acmesatellite.conf. The basename of the file without the trailing .conf (acmesatellite) will be used as the Satellite’s unique identifier within its tenant (tenant_A).

For new Satellite installations, please adhere to the new standard, which requires always specifying a tenant (and creating its folder) when configuring a Satellite.

Warning

The tenant folder is mandatory even if you are not in a multi-tenant environment: in this case you will simply have a single tenant.

If you have a single tenant, a good choice for the tenant name can be, for example, the company name of the tenant itself.

For existing Satellite installations in single-tenant environments, where multi-tenancy is not expected to be introduced in the near future, it is possible to use a special tenant called master. The master tenant represents the tenant of the NetEye Master. This means that Satellites belonging to the master tenant belong to the same tenant as the NetEye Master.

In this case, the tenant folder must be called master.

Tenant and Satellite Naming Conventions

Tenants and Satellites must satisfy the following requirements:

  • The Satellite name must match the regex /^[a-zA-Z0-9]{1,32}$/, i.e., it may contain only alphanumeric characters

  • The tenant name must match the regex /^[a-zA-Z0-9_]{1,32}$/, i.e., it may contain only alphanumeric characters and underscores

  • The Satellite name must not contain the string master

  • The tenant name must not contain the string icinga2

  • The Satellite name must be unique within a tenant, but Satellites in different tenants may have the same name

  • The icinga2_zone must match the regex /^[[:alnum:][:blank:]_-]+$/, i.e., it may contain only alphanumeric characters, underscores, dashes, and whitespace
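These rules can be checked from the shell with grep -E before creating the configuration files. The helper functions below are a convenience sketch, not part of the NetEye tooling:

```shell
# Sketch: validate Satellite and tenant names against the documented rules.
valid_satellite_name() {
  printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9]{1,32}$' &&
    ! printf '%s' "$1" | grep -q 'master'
}
valid_tenant_name() {
  printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9_]{1,32}$' &&
    ! printf '%s' "$1" | grep -q 'icinga2'
}
valid_satellite_name acmesatellite && echo "satellite name ok"
valid_tenant_name tenant_A && echo "tenant name ok"
```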

We also suggest using a meaningful name for each Satellite and each tenant you want to configure.

While tenants and Satellites can be renamed, the procedure is not automatic and requires manual intervention.

If icinga2_zone is not defined in the Satellite configuration file /etc/neteye-satellites.d/tenant_A/acmesatellite.conf, the default <tenant>_<satellite name> will be used as the zone name. If the user specifies the icinga2_zone attribute, then <tenant> will be prepended to it. If the user also sets the attribute icinga2_tenant_in_zone_name to false, then <tenant> is not prepended. If the tenant is the special tenant master, <tenant> is never prepended to the zone name.
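The zone-name rules above can be sketched as a small shell function. This is illustrative only (the real computation is performed by neteye satellite config create), and the underscore separator for a custom zone is an assumption based on the default naming scheme:

```shell
# Sketch of the documented Icinga2 zone-name rules.
# Args: tenant, satellite name, icinga2_zone ('' if unset),
#       icinga2_tenant_in_zone_name (true/false, default true)
icinga2_zone_name() {
  tenant=$1; satellite=$2; zone=$3; tenant_in_zone=${4:-true}
  if [ -z "$zone" ]; then
    echo "${tenant}_${satellite}"   # default: <tenant>_<satellite name>
  elif [ "$tenant" = "master" ] || [ "$tenant_in_zone" = "false" ]; then
    echo "$zone"                    # tenant is not prepended
  else
    echo "${tenant}_${zone}"        # tenant prepended to the custom zone
  fi
}
icinga2_zone_name tenant_A acmesatellite "" true          # → tenant_A_acmesatellite
icinga2_zone_name tenant_A acmesatellite acme_zone true   # → tenant_A_acme_zone
icinga2_zone_name tenant_A acmesatellite acme_zone false  # → acme_zone
```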

The configuration file must have the following content:

{
  "fqdn": "acmesatellite.example.com",
  "name": "acmesatellite",
  "ssh_port": "22",
  "ssh_enabled": true,
  "icinga2_zone": "acme_zone"
}

The configuration file of the Satellite must contain the following attributes:

  • fqdn: this is the Satellite fully qualified domain name.

  • name: this is the Satellite name, it must coincide with the configuration file name.

  • ssh_port: this is the port to use for SSH from Master to Satellite. Specify a different port in case of custom SSH configurations.

  • ssh_enabled: if set to true, SSH connections from Master to the Satellite can be established. If set to false, configuration files for the Satellite must be manually copied from Master.

  • icinga2_zone: this is the Satellite’s Icinga2 high availability zone. This parameter is optional; the default value is <tenant>_<satellite name>

  • icinga2_wait_for_satellite_connection: if set to false, the Satellite will wait for the Master to open the connection. This parameter is optional; the default value is true

  • icinga2_tenant_in_zone_name: if set to false, the tenant is not prepended to the Icinga2 zone name. This parameter is optional; the default value is true. It should be used only for existing multi-tenant installations, so its usage is strongly discouraged for new installations. If a multi-tenant installation is not required, please use the special tenant master instead

  • icinga2_endpoint_log_duration: optional. The duration for keeping replay logs on connection loss; defaults to 1d (86400 seconds). The attribute is specified in seconds; if the duration is set to 0, replaying logs is disabled. You can also specify the value in a human-readable format, such as 10m for 10 minutes or 1h for one hour
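Putting the optional attributes together, an extended configuration for acmesatellite might look like this (the values shown are illustrative, not recommendations):

```
{
  "fqdn": "acmesatellite.example.com",
  "name": "acmesatellite",
  "ssh_port": "22",
  "ssh_enabled": true,
  "icinga2_zone": "acme_zone",
  "icinga2_wait_for_satellite_connection": true,
  "icinga2_tenant_in_zone_name": true,
  "icinga2_endpoint_log_duration": "1d"
}
```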

Note

Remember to add the FQDN of the Satellite to /etc/hosts. If you are in a cluster environment, you must update /etc/hosts on each node of the cluster.

If you are installing a Satellite within a cluster, run the following command to synchronize the files /etc/neteye-satellites.d/* and /etc/neteye-cluster on all cluster nodes:

neteye config cluster sync

Generate the Satellite Configuration Archive and Configure the Master

To generate the configuration files for the acmesatellite Satellite and to reconfigure Master services, run the following command on the Master:

neteye satellite config create acmesatellite

The command generates all the required configuration files for the Satellite, which are stored in /root/satellite-setup/config/<neteye_release>/tenant_A-acmesatellite-satellite-config.

The command executes the Master autosetup scripts located in /usr/share/neteye/secure_install_satellite/master/, automatically reconfiguring services to allow the interaction with the Satellite.

For example, the NATS Server is reconfigured to authorize leaf connections from acmesatellite, while streams coming from the Satellite are exported in order to be accessible from Tornado or Telegraf consumers.

Note

A pre-existing NATS Server configuration must be migrated before configuring any new Satellite. Please refer to this section for the migration procedure.

In case the same name is used for more than one Satellite in different tenants, the --tenant switch must be used to specify the desired tenant.

neteye satellite config create acmesatellite --tenant tenant_A

Note

The command neteye satellite config create computes the resulting Icinga2 Zone name at run-time, also validating the name in the process.

Please note that the resulting Zone, which can differ from the one specified via the icinga2_zone attribute, must be unique across all tenants. If this property is not satisfied, the neteye satellite config create command triggers an error, stopping the Satellite configuration.

A new Telegraf local consumer is also automatically started and configured for each tenant, to consume metrics coming from the Satellites through NATS and to write them to InfluxDB. In our example, the Telegraf instance is called telegraf-local@neteye_consumer_influxdb_tenant_A.

Note

If you are in a cluster environment, an instance of Telegraf local consumer is started on each node of the cluster, to exploit the NATS built-in load balancing feature called distributed queue. For more information about this feature, see the official NATS documentation

The command also creates an archive containing all the configuration files, in order to easily move them to the Satellite. The archive can be found at /root/satellite-setup/config/<neteye_release>/tenant_A-acmesatellite-satellite-config.tar.gz

Alternatively, configurations can be generated for all the Satellites of all tenants defined in /etc/neteye-satellites.d/ by typing:

neteye satellite config create --all

Synchronize the Satellite Configuration Archive

To synchronize the configuration files between the Master and the acmesatellite Satellite, provided ssh_enabled is set to true, run the following command on the Master:

neteye satellite config send acmesatellite

Note

If the attribute ssh_port is not defined, the default SSH port (22) is used. If the attribute ssh_enabled is set to false or is not defined for a specific Satellite, the configuration archive must be copied manually before proceeding.

In case the same name is used for more than one Satellite in different tenants, the --tenant switch has to be used to specify the desired tenant.

neteye satellite config send acmesatellite --tenant tenant_A

The command uses the unique ID of the Satellite to retrieve the connection attributes from the Satellite configuration file /etc/neteye-satellites.d/tenant_A/acmesatellite.conf, and uses them to send the archive tenant_A-acmesatellite-satellite-config.tar.gz to the Satellite.

Alternatively, configuration archives can be sent to all Satellites defined in /etc/neteye-satellites.d/ by typing:

neteye satellite config send --all

The configuration archives of all Satellites belonging to a specific tenant can be sent to the related Satellites using the following command:

neteye satellite config send --tenant tenant_A

Satellite Setup

Configure the acmesatellite Satellite with the following command on the Satellite itself:

neteye satellite setup

This command performs three actions:

  • Copies the configuration files into the correct places, overriding any current configurations.

  • Creates a backup of the configuration for future use in /root/satellite-setup/config/<neteye_release>/satellite-config-backup-<timestamp>/

  • Executes the autosetup scripts located in /usr/share/neteye/secure_install_satellite/satellite/

To execute this command, the configuration archive must be located at /root/satellite-setup/config/<neteye_release>/satellite-config.tar.gz. Use the neteye satellite config send command, or copy the archive manually if no SSH connection is available.

Note

Configuration provided by the Master is not user-customizable: any change will be overwritten by the new configuration when running neteye satellite setup.

Note

Services configured via neteye satellite setup, like the NATS Server, must be restarted manually after a reboot of the Satellite.

Advanced configurations

For more advanced configurations of Icinga2, have a look at NetEye Satellite Nodes.