User Guide

System Installation

In this section you will find guidelines for installing and setting up NetEye in different environments: as a single node or as a cluster, and with satellites if necessary.

NetEye 4 is available as an ISO image for physical installations as part of our continuous release strategy. Please check section Acquiring NetEye ISO Image for download instructions.

Supported Virtualization Environments

NetEye ISO installation is supported in the following virtualization environments:

  • VMware

  • KVM

  • Hyper-V

VMware

To create a virtual machine and install NetEye on it, start VMware Workstation, click on File > New Virtual Machine, and follow these steps:

  1. Select “Custom (advanced)”, then click “Next”.

  2. Leave the defaults as they are, and click “Next”.

  3. Select “ISO image” and then the NetEye ISO you want to install. You might see the warning “Could not detect which operating system is in this image. You will need to specify which operating system will be installed”. Ignore it and click “Next”.

  4. Select Linux as the Guest OS, and specify “Red Hat Linux” in the dropdown menu. Click “Next”.

  5. Name the VM as you prefer, and select the location to store it.

  6. Specify the number of processors (recommended: 2) and click “Next”.

  7. Specify the amount of memory (recommended: 4GB), click “Next”.

  8. Select the type of connection according to your needs, click “Next”.

  9. Keep the default settings for I/O controllers, click “Next”.

  10. Select “SATA” as virtual disk type, click “Next”.

  11. Select “Create a new virtual disk”, click “Next”.

  12. Specify the disk capacity (minimum: 40GB), click “Next”.

  13. Rename the disk to a name you prefer.

  14. Review the configuration you just created, deselect “Automatically start the VM”, and click on “Finish”.

You should now proceed to section Powering up the VM.

KVM

To create a virtual machine and install NetEye on it, start the Virtual Machine Manager, click on File > New Virtual Machine to start the configuration, and follow these steps:

  1. Select “Local install media”, and click on “Forward”.

  2. Choose the NetEye ISO to install, uncheck “Automatically detect from the installation media/source” under “Choose the operating system you are installing”, and then select “CentOS 7.0” for the OS (you can also start typing in the text box to see the available OSs, or run osinfo-query os in your terminal to see all available variants). Click “Forward”.

  3. Specify the amount of memory (recommended: 4GB) and the number of processors (recommended: 2), then click “Forward”.

  4. Specify the disk capacity (minimum: 40GB), click “Forward”.

  5. Give the VM the name you prefer, and review the configuration. Check “Customize configuration before install”, then click “Finish”.

  6. In the configuration panel that appears, go to “Boot Options” and check that Disk1 and CDRom are both selected.

  7. In the next configuration panel that appears, go to “VirtIO Disk 1”, expand the Advanced options, and change the disk bus to SATA.

  8. Click on “Apply” to propagate your changes.

  9. Click on “Begin installation” to start the NetEye installation.

You should now proceed to section Powering up the VM.

Hyper-V

To create a virtual machine and install NetEye on it, start Hyper-V Manager, select Actions > New > Virtual Machine to start the configuration, and follow these steps:

  1. Click “Next”.

  2. Specify the name of your new VM and where to store it, then click “Next”.

  3. Leave the defaults for “Specify Generation”, click “Next”.

  4. Specify the amount of memory (recommended: 4GB), click “Next”.

  5. Select “Default switch” as the connection adapter, click “Next”.

  6. Specify the disk capacity (minimum: 40GB), click “Next”.

  7. Specify the ISO that you want to install, click “Next”.

  8. Review your settings, then click “Finish”.

  9. Before starting your new VM, check the list of startup media in the BIOS settings and make sure the CD is included in the list.

  10. Click on Action > Start to start the virtual machine.

You should now proceed to section Powering up the VM.

Powering up the VM

At this point, your VM should be successfully created, and you can power it up. After a few seconds, the NetEye logo will appear, and a countdown to automatically initiate the installation will start.

After ten seconds, if no key is pressed, the installation process starts. The installation process will take several minutes to complete, after which the VM will reboot from the internal hard disk.

At the end of the boot process, you will be prompted to enter your credentials (root/admin). If the login is successful, you can now start to configure your NetEye VM.

Acquiring NetEye ISO Image

All the NetEye 4 ISO images can be found on the NetEye download site. To be sure you have downloaded a valid image, follow the verification procedure below.

Import the public GPG key

First download the GPG public key as a zipped archive from NetEye -> GPG public key -> public-gpg-key. Extract the archive and then import the key with the following command:

$ gpg --import public.gpg

Now verify the imported key:

$ gpg --fingerprint net.support@wuerth-phoenix.com

If the fingerprint matches the one from the NetEye blog, you have the right key installed on your system.

Download and verify

From the link above download:

  • The desired ISO file

  • The sha256sum.txt.asc file

Once you have the sha256sum.txt.asc file, verify it as follows:

$ gpg --verify sha256sum.txt.asc

The output will look something like this:

gpg: Signature made Tue 29 Sep 2020 03:50:01 PM CEST
gpg:               using RSA key B6777D151A0C0C60
gpg: Good signature from "Wuerth Phoenix <net.support@wuerth-phoenix.com>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: E610 174A 971E 5643 BC89  A4C2 B677 7D15 1A0C 0C60

Once you have verified the signature of the sha256sum.txt.asc file, please make sure you have the ISO and the sha256sum.txt.asc file in the same directory. You can then verify the ISO file with the following command:

$ sha256sum -c sha256sum.txt.asc 2>&1 | grep OK

The output will look something like this:

neteye4.15-centos7.stable.iso: OK

At this point the ISO file is verified and ready to be used. If the output is different from “OK”, the ISO image may be corrupted and must be downloaded again.

Single Nodes and Satellites

This section describes how to set up your NetEye virtual machine from scratch, and presents the NetEye 4 monitoring environment.

System Setup

NetEye 4 is delivered as a Virtual Machine. Once installed, you will need to access the VM via a terminal or ssh. The first time you log in, you will be required to change your password to a non-trivial one. To maintain a secure system, you should do this as soon as possible. The next steps are to configure your network, update NetEye, and complete the installation.

This procedure is split into two parts: the first part applies to both Single Nodes and Satellite Nodes, while the second applies to Single Node installations only.

Note

Curly braces ({ }) mark values that must be inserted according to the local infrastructure. For example, in {hostname.domain}, hostname should be replaced with the actual hostname given to the node and domain with the local domain.

Part 1: Single Nodes and Satellite Nodes

Step 1: Define the host name for the NetEye instance.

# hostnamectl set-hostname {hostname.domain}
# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
{ip}        {hostname.domain} {hostname}

Step 2: Define the DNS configuration.

# vim /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search {domain}
nameserver {ip1}
nameserver {ip2}

Step 3: Configure the network.

# vim /etc/sysconfig/network-scripts/ifcfg-{interface}
# Generated by parse-kickstart
IPV6INIT="yes"
DHCP_HOSTNAME="{hostname}"
IPV6_AUTOCONF="yes"
BOOTPROTO="static" # To configure according to the client
DEVICE="{interface}"
ONBOOT="yes"
IPADDR={ip}  # Configure these three only if static
NETMASK={mask}
GATEWAY={gw}

Step 4: Install the latest updates and packages for NetEye.

# yum update
# yum --enablerepo=neteye update
# yum --enablerepo=neteye groupinstall neteye

Step 5: Define an SSH key for secure communications with Satellites.

# ssh-keygen -t rsa

Step 6: Set the local time zone.

Find the time zone that best matches your location:

# timedatectl list-timezones

Set it system-wide using the following command:

# timedatectl set-timezone {Region}/{City}

Then update PHP to use that same location:

  • Create a file named /neteye/local/php/conf/php.d/30-timezone.ini

  • Insert the following text in that file: date.timezone = {Region}/{City}

  • Restart the php-fpm service: # systemctl restart php-fpm.service
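
For example, assuming the Europe/Rome time zone (replace it with your own {Region}/{City} value), the three steps above can be carried out as follows:

# echo 'date.timezone = Europe/Rome' > /neteye/local/php/conf/php.d/30-timezone.ini
# systemctl restart php-fpm.service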

Note

If you are setting up a NetEye Satellite, skip the next section and make sure to carry out the steps in section Satellite Nodes Only.

Part 2: Single Nodes Only

Step 7: Make sure required services are running.

# systemctl start grafana-server.service mariadb.service

Step 8: Complete NetEye setup.

Run the secure install script:

# /usr/sbin/neteye_secure_install

If you would like to verify that NetEye is correctly installed, you can bring up all services and check its current status with the following commands.

# neteye start

# neteye status

Root User Password

When NetEye is first installed, the system generates a unique, random password to use when logging in to the web interface. The password is saved in a hidden file in the root directory of the machine: /root/.pwd_icingaweb2_root.

The first time you log in to the NetEye web interface, you will need to insert the following credentials:

  • User:   root

  • Password: The password found inside the file /root/.pwd_icingaweb2_root.
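
For example, you can display the generated password from a terminal on the node with:

# cat /root/.pwd_icingaweb2_root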

We suggest that you change the root password to a strong one, with at least the following characteristics:

  • At least six characters long (the more characters, the stronger the password)

  • A combination of letters, numbers and symbols (@, #, $, %, etc.).

  • Both uppercase and lowercase letters

To change your password, click on the gear icon at the bottom left of NetEye, enter and confirm the new password, then click the “Update Account” button.

Cluster Nodes

NetEye 4’s clustering service is based on the Red Hat 7 High Availability Clustering technologies:

  • Corosync: Provides group communication between a set of nodes, application restart upon failure, and a quorum system.

  • Pacemaker: Provides cluster management, lock management, and fencing.

  • DRBD: Provides data redundancy by mirroring devices (hard drives, partitions, logical volumes, etc.) between hosts in real time.

Cluster resources are typically quartets consisting of an internal floating IP, a DRBD device, a filesystem, and a (systemd) service.

Once you have installed clustering services according to the information on this page, please turn to the Cluster Architecture page for more information on configuration and how to update.

Prerequisites

A NetEye 4 cluster must consist of between 2 and 16 identical servers running CentOS 7. They must satisfy the following requirements:

  • Networking:

    • Bonding across NICs must be configured

    • A dedicated cluster network interface, named exactly the same on each node

    • One external static IP address which will serve as the external Cluster IP

    • One IP Address for each cluster node (i.e., N addresses)

    • One virtual (internal) subnet for internal floating service IPs (this subnet MUST NOT be reachable from any machine except cluster nodes, as it poses a security risk otherwise)

    • All nodes must know the internal IPs of all other nodes, and be reachable over the internal network (defined in /etc/hosts)

  • Storage:

    • At least one volume group with enough free storage to host all service DRBD devices defined in Services.conf

  • General:

    • All nodes must have root ssh-keys generated, and must trust each other, with the keys stored in /root/.ssh/authorized_keys

    • Internet connectivity, including the ability to reach repositories at Würth Phoenix

    • All nodes must have the yum group ‘neteye’ installed

    • All nodes must have the latest CentOS 7 and NetEye 4 updates installed
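
As an illustration of the networking and SSH trust requirements above, on a two-node cluster the internal /etc/hosts entries and the root SSH keys could be set up as follows (the hostnames, the IPs and the internal subnet are examples only, in the style of the VotingOnlyNode sample shown later in this section):

# vim /etc/hosts
192.168.47.1   neteye01.neteyelocal neteye01
192.168.47.2   neteye02.neteyelocal neteye02

# ssh-keygen -t rsa                       # generate a root SSH key on each node
# ssh-copy-id root@neteye02.neteyelocal   # repeat from every node towards every other node,
                                          # so the keys end up in /root/.ssh/authorized_keys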

Installation Procedure

Depending on the type of nodes you are installing in your cluster, select the appropriate procedure from the following.

If your NetEye setup includes satellites, please make sure to carry out the steps in section Satellite Nodes Only after each node’s installation.

Basic Cluster Install

  • Define the nodes in ClusterSetup.conf (example configuration templates can be found in /usr/share/neteye/cluster/templates/). This guide assumes that you copy and configure your ClusterSetup.conf in /usr/share/neteye/scripts/cluster (a sketch of this file is shown after the example commands below).

  • Run the cluster setup script to install a basic Corosync/Pacemaker cluster with a floating clusterIP enabled.

Note

If any issue prevents the correct execution of cluster_base_setup.pl, you can run the same command again with the --force option. Note that this will destroy any existing cluster on the nodes.

Note

The password should be treated as a one-time password, and will not be needed after initial setup.

# cd /usr/share/neteye/scripts/cluster
# ./cluster_base_setup.pl -c ./ClusterSetup.conf -i <cluster-ip> -s <subnet_cidr> -h neteye.example.com -e <internal interface> -p <very_safe_pw>
# ./cluster_base_setup.pl -c ./ClusterSetup.conf -i 192.0.2.47 -s 24 -h neteye.example.com -e ens224 -p Secret
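
The exact schema of ClusterSetup.conf is defined by the templates shipped in /usr/share/neteye/cluster/templates/; purely as a sketch, and assuming a Nodes array whose entries mirror the VotingOnlyNode entry shown later in this section, a two-node definition might look like:

"Nodes" : [
   {
      "addr" : "192.168.47.1",
      "hostname" : "neteye01.neteyelocal",
      "hostname_ext" : "neteye01.example.com",
      "id" : 1
   },
   {
      "addr" : "192.168.47.2",
      "hostname" : "neteye02.neteyelocal",
      "hostname_ext" : "neteye02.example.com",
      "id" : 2
   }
]
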
  • The standard Würth Phoenix fencing is IPMI Fencing for physical clusters, and vSphere fencing for virtual clusters.

Cluster Service Setup

  • Adjust all necessary IPs, ports, DRBD devices, sizes, etc. in all *.conf.tpl files (found in /usr/share/neteye/cluster/templates/). In a typical configuration you only need to update ip_pre, which is the prefix of the IP (e.g., 192.168.1 for 192.168.1.0/24) used to generate the virtual IP for the resource, and cidr_netmask, which specifies the CIDR of the internal subnet used by IP resources (e.g., 24 for 192.168.1.0/24).

  • Run the cluster_service_setup.pl script on each *.conf.tpl file starting from Services-core.conf.tpl:

    # cd /usr/share/neteye/scripts/cluster
    # ./cluster_service_setup.pl -c Services-core.conf.tpl
    # ./cluster_service_setup.pl -c Services-xxx.conf.tpl
    # [...]
    

    The cluster_service_setup.pl script is designed to report the last command executed in case there were any errors. If you manually fix an error, you will need to remove the successfully configured resource template from Services.conf and rerun that command. Then you should execute the cluster_service_setup.pl script again as just above in order to finalize the configuration.

NetEye Service Setup

  • Move all resources to a single node by running pcs node standby on all other nodes (see the sketch after this list). This is only a first-time requirement, as many services require local write access during the initial setup procedure.

  • Run the neteye_secure_install script on the single active node

  • Run the neteye_secure_install script on every other node

  • Take all nodes out of standby by running pcs node unstandby --all

  • Set up the Director field “API user” on slave nodes (Director ‣ Icinga Infrastructure ‣ Endpoints)
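
A minimal sketch of this sequence, assuming a three-node cluster with the hypothetical node names neteye01.neteyelocal, neteye02.neteyelocal and neteye03.neteyelocal:

# pcs node standby neteye02.neteyelocal    # put all nodes except one in standby
# pcs node standby neteye03.neteyelocal
# /usr/sbin/neteye_secure_install          # run on the single active node (neteye01)
# /usr/sbin/neteye_secure_install          # then run on every other node
# pcs node unstandby --all                 # finally, take all nodes out of standby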

Elasticsearch Only Nodes

This section applies only if you have installed the Log Manager module, which contains Elasticsearch.

An Elasticsearch only node has the same prerequisites and follows the same installation procedure as a standard NetEye cluster node. Please refer to Cluster Configuration Guidelines / Elasticsearch Only Nodes and to Elasticsearch Clusters / Elasticsearch Only Nodes for details.

Voting Only Nodes

A Voting only node has the same prerequisites and follows the same installation procedure as a standard NetEye cluster node.

To create a voting only node you have to create an entry of type VotingOnlyNode in the file ClusterSetup.conf, as in the following example. The usage of ClusterSetup.conf is explained in Cluster Installation / Basic Cluster Install. The syntax is similar to the one used for standard Nodes, but note that at most one voting node can be part of the cluster; therefore a single JSON object is specified instead of an array.

"VotingOnlyNode" : {
       "addr" : "192.168.47.3",
       "hostname" : "neteye03.neteyelocal",
       "hostname_ext" : "rdneteye03.si.wp.lan",
       "id" : 3
   }

Please refer to Cluster Configuration Guidelines / Voting Only Nodes for details.

If you have installed the Log Manager module, which contains Elasticsearch, please also refer to Elasticsearch Clusters / Voting Only Nodes for details.

Satellite Nodes Only

Prerequisites

A Satellite is a NetEye instance which depends on a main NetEye installation, the Master (which, in the case of a cluster is also the Master node), and carries out tasks such as:

  • execute Icinga 2 checks and forward results to the Master

  • collect logs and forward them to the Master

  • forward data through NATS

  • collect data through Tornado Collectors and forward them to the Master to be processed by Tornado

It is required that both the Master and the Satellite be equipped with the same NetEye version. Satellites can be arranged in tenants.

Moreover, the NATS connection between Master and Satellite is always initiated by the Satellite, so please ensure that the Networking Requirements for NATS Leaf Nodes are satisfied.

Warning

Before proceeding with the Satellite configuration procedure, if you are in a NetEye cluster environment, check that all resources are in Started status.

The remainder of this section will lead you through the steps needed to configure a new NetEye Satellite.

For further information about NetEye Satellites, refer to Section Satellite.

Configuration of a Satellite

The configuration of a Satellite is carried out in two phases. Phase one consists of the basic networking setup, which can be carried out by following steps 1 to 6 (i.e., Part 1) of System Setup. Phase two consists of the remainder of this section.

Warning

Never run the neteye_secure_install command on Satellite Nodes, because this would remove all Satellite configuration. You will therefore end up with a NetEye Single Node instead!

We will use the following notation when configuring a NetEye Satellite.

  • the domain is example.com

  • the tenant is called tenant_A

  • the Satellite is called acmesatellite

This notation will be used also for the Configuration of a Second Satellite in Existing Icinga2 Zone and whenever a NetEye Satellite configuration is mentioned.

In order to create a new Satellite (acmesatellite), on the Master we first need to create the folder for the tenant (tenant_A, hence /etc/neteye-satellite.d/tenant_A), and then create the configuration file /etc/neteye-satellite.d/tenant_A/acmesatellite.conf, as shown below.
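
For example, on the Master:

# mkdir -p /etc/neteye-satellite.d/tenant_A
# vim /etc/neteye-satellite.d/tenant_A/acmesatellite.conf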

Note

The basename of the file, without the trailing .conf (i.e., acmesatellite), will be used as Satellite unique identifier in the same tenant.

For new Satellite installations, please adhere to the new standard, which requires you to always specify a tenant (and create its folder) when configuring a Satellite.

Warning

The tenant folder is mandatory even if you are not in a multi-tenant environment: in this case you will simply have a single tenant.

If you have a single tenant, a good choice for the tenant name can be, for example, the company name of the tenant itself.

For existing Satellite installations in single-tenant environments where multi-tenancy is not expected to be introduced in the near future, it is possible to use a special tenant called master. The master tenant represents the tenant of the NetEye Master; this means that Satellites belonging to the master tenant belong to the same tenant as the NetEye Master.

In this case, the tenant folder must be called master.

The configuration, including all optional parameters, should look similar to this excerpt.

{
  "fqdn": "acmesatellite.example.com",
  "name": "acmesatellite",
  "ssh_port": "22",
  "ssh_enabled": true,
  "icinga2_zone": "acme_zone",
  "icinga2_wait_for_satellite_connection": false,
  "icinga2_tenant_in_zone_name": true,
  "proxy": {
    "ssl_protocol": "TLSv1 TLSv1.1 TLSv1.2",
    "ssl_cipher_suite": "!SEED:!IDEA"
  }
}

The configuration file of the Satellite must contain the following attributes:

  • fqdn: this is the Satellite fully qualified domain name.

  • name: this is the Satellite name; it must coincide with the configuration file name.

  • ssh_port: this is the port to use for SSH from Master to Satellite. Specify a different port in case of custom SSH configurations.

  • ssh_enabled: if set to true, SSH connections from Master to the Satellite can be established. If set to false, configuration files for the Satellite must be manually copied from Master.

  • icinga2_zone: this is the Satellite Icinga2 high availability zone. This parameter is optional and its default value is <tenant>_<satellite name>.

  • icinga2_wait_for_satellite_connection: if set to false, the Satellite will wait for the Master to open the connection. This parameter is optional and its default value is true.

  • icinga2_tenant_in_zone_name: if set to false, the tenant is not prepended to the Icinga2 zone name. This parameter is optional and its default value is true. This parameter should be used only for existing multi-tenant installations; for this reason, its usage is strongly discouraged for new installations. If a multi-tenant installation is not required, please use the special tenant master instead.

  • proxy.ssl_protocol: this is the set of protocols allowed in NGINX. This parameter is optional and its default value is TLSv1 TLSv1.1 TLSv1.2. Change this either to improve security or to allow older protocols for backward compatibility.

  • proxy.ssl_cipher_suite: this is the cipher suite allowed in NGINX. This parameter is optional and its default value is HIGH:3DES:!aNULL:!MD5:!SEED:!IDEA. Change this either to improve security or to allow older ciphers for backward compatibility.

  • icinga2_endpoint_log_duration: this is the amount of time for which replay logs are kept on connection loss. It corresponds to log_duration when defining Icinga2 endpoints, as described in the official Icinga2 documentation. This parameter is optional and, if not set, it will take the Icinga2 default (1d or 86400s).

Note

Remember to add the FQDN of the Satellite to /etc/hosts. If you are in a cluster environment, you must update /etc/hosts on each node of the cluster.
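
For example, using the same placeholder notation as above:

# vim /etc/hosts
{ip}        acmesatellite.example.com acmesatellite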

If you are installing a Satellite within a cluster, run the following command to synchronise the files /etc/neteye-satellites.d/* and /etc/neteye-cluster on all cluster nodes:

neteye config cluster sync

Generate the Satellite Configuration Archive and Configure the Master

To generate the configuration files for the acmesatellite Satellite and to reconfigure Master services, run the following command on the Master:

neteye satellite config create acmesatellite

The command generates all the required configuration files for the Satellite, which are stored in /root/satellite-setup/config/<neteye_release>/tenant_A-acmesatellite-satellite-config.

Warning

On a cluster, this command will temporarily put all the cluster resources in unmanaged state. This means that pcs will not take care of handling clusterized services until a valid configuration is successfully created. In case of error during the execution of the neteye satellite config create command the cluster is left in unmanaged state to avoid downtimes.

If this happens the user is required to:

  • fix the errors

  • run again the command neteye satellite config create

The command executes the Master autosetup scripts located in /usr/share/neteye/secure_install_satellite/master/, automatically reconfiguring services to allow the interaction with the Satellite.

For example, the NATS Server is reconfigured to authorize leaf connections from acmesatellite, while streams coming from the Satellite are exported in order to be accessible from Tornado or Telegraf consumers.

Note

A pre-existing NATS Server configuration must be migrated before configuring any new Satellite. Please refer to this section for the migration procedure.

In case the same name is used for more than one Satellite in different tenants, the --tenant switch must be used to specify the desired tenant.

neteye satellite config create acmesatellite --tenant tenant_A

Note

The command neteye satellite config create computes the resulting Icinga2 Zone name at run-time, also validating the name in the process.

The resulting Zone, which can be different than the one specified via the icinga2_zone attribute, must be unique across all tenants. In case the property is not satisfied, the neteye satellite config create command triggers an error, stopping the Satellite configuration.

A new Telegraf local consumer is also automatically started and configured for each tenant, to consume metrics coming from the Satellites through NATS and to write them to InfluxDB. In our example, the Telegraf instance is called telegraf-local@neteye_consumer_influxdb_tenant_A.
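
To check that the consumer is running for the tenant, you can, for instance, query its systemd status (the instance name is the one from the example above):

# systemctl status telegraf-local@neteye_consumer_influxdb_tenant_A.service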

Note

If you are in a cluster environment, an instance of Telegraf local consumer is started on each node of the cluster, to exploit the NATS built-in load balancing feature called distributed queue. For more information about this feature, see the official NATS documentation

The command also creates an archive containing all the configuration files, in order to easily move them to the Satellite. The archive can be found at /root/satellite-setup/config/<neteye_release>/tenant_A-acmesatellite-satellite-config.tar.gz

Alternatively, configurations can be generated for all the Satellites of all tenants defined in /etc/neteye-satellites.d/ by typing:

neteye satellite config create --all

Synchronize the Satellite Configuration Archive

To synchronize the configuration files between the Master and the acmesatellite Satellite, provided that ssh_enabled is set to true, run the following command on the Master:

neteye satellite config send acmesatellite

Note

If the attribute ssh_port is not defined, the default SSH port (22) is used. If the attribute ssh_enabled is set to false or not defined for a specific Satellite, the configuration archive must be copied manually before proceeding.

In case the same name is used for more than one Satellite in different tenants, the --tenant switch has to be used to specify the desired tenant.

neteye satellite config send acmesatellite --tenant tenant_A

The command uses the unique ID of the Satellite to retrieve the connection attributes from the Satellite configuration file /etc/neteye-satellites.d/tenant_A/acmesatellite.conf, and uses them to send the archive tenant_A-acmesatellite-satellite-config.tar.gz to the Satellite.

Alternatively, configuration archives can be sent to all Satellites defined in /etc/neteye-satellites.d/ by typing:

neteye satellite config send --all

The configuration archives of all Satellites belonging to a specific tenant can be sent to the related Satellites using the following command:

neteye satellite config send --tenant tenant_A

Satellite Setup

Configure the acmesatellite Satellite with the following command on the Satellite itself:

neteye satellite setup

This command performs three actions:

  • Copies the configuration files to the correct places, overriding current configurations, if any.

  • Creates a backup of the configuration for future use in /root/satellite-setup/config/<neteye_release>/satellite-config-backup-<timestamp>/

  • Executes autosetup scripts located in /usr/share/neteye/secure_install_satellite/satellite/

To execute this command, the configuration archive must be located at /root/satellite-setup/config/<neteye_release>/satellite-config.tar.gz. Use the neteye satellite config send command, or copy the archive manually if no SSH connection is available (see the example below).
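
If the archive must be copied manually, any transfer method will do; purely as a sketch, assuming SSH access from the Satellite towards the Master (the Master hostname is a placeholder), you could pull the archive from the Satellite itself and store it under the expected name:

# scp root@{master.fqdn}:/root/satellite-setup/config/<neteye_release>/tenant_A-acmesatellite-satellite-config.tar.gz \
      /root/satellite-setup/config/<neteye_release>/satellite-config.tar.gz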

Note

Configuration provided by the Master is not user-customizable: any change will be overwritten by the new configuration when running neteye satellite setup.

Advanced configurations

Adding a Service to the Satellite Target

Adding a systemd service to neteye-satellite.target (see Satellite Services) can be useful when a custom systemd service needs to be managed together with the other services of the Satellite.

The main use case is when Satellite admins want a service they created to always start automatically when the Satellite Node reboots.

To attach a new systemd service to the Satellite Target you can use the command neteye satellite service add. For example, if you want to add the service telegraf-local@my_custom_instance.service to the neteye-satellite.target you can execute:

neteye satellite service add telegraf-local@my_custom_instance

Warning

The service name passed to the neteye satellite service add command must not contain the .service suffix, which would lead to an incorrect configuration.

The command neteye satellite service remove instead removes a service from the Satellite Target. This can be useful if you previously added a custom service by mistake; it must not be used to remove a NetEye service from the Satellite Target.

For example, to remove telegraf-local@my_custom_instance.service from the neteye-satellite.target you can execute:

neteye satellite service remove telegraf-local@my_custom_instance

We remind you that you can verify which services are attached to the Satellite Target with the command:

systemctl list-dependencies neteye-satellite.target

Icinga2 advanced configuration

You can have a look at NetEye Satellite Nodes for more advanced configurations of Icinga2.

NGINX advanced configurations

NGINX is installed and enabled by default on Satellites and is responsible for exposing local services, like the Tornado Webhook collector, and for performing TLS termination. NGINX can be customised to some extent, so it can be employed in other scenarios like those described below.

Change NGINX Certificates

By default, NGINX is configured with self-signed certificates generated on the Satellite side. To use your own certificates, you must not change the NGINX configuration; instead, you can overwrite the existing self-signed certificates in the following locations (see the example after this list):

  • Certificate: it is mandatory and located in /neteye/local/nginx/conf/tls/certs/neteye_cert.crt

  • Key: it is mandatory and located in /neteye/local/nginx/conf/tls/private/neteye.key

  • CA or CA bundle: it is mandatory and located in /neteye/local/nginx/conf/tls/certs/neteye_ca_bundle.crt
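
For instance, custom certificates could be put in place as follows (the source file names are placeholders, and the nginx service name used for the restart is an assumption):

# cp {my_cert}.crt /neteye/local/nginx/conf/tls/certs/neteye_cert.crt
# cp {my_key}.key /neteye/local/nginx/conf/tls/private/neteye.key
# cp {my_ca_bundle}.crt /neteye/local/nginx/conf/tls/certs/neteye_ca_bundle.crt
# systemctl restart nginx    # restart NGINX to apply the new certificates (service name assumed)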

Setup a Reverse Proxy for an HTTPS Resource

In this scenario we assume that you want to forward all HTTPS requests for neteyeshare to the master.

If you are familiar with Apache httpd, the corresponding configuration would look like this:

ProxyPass /neteyeshare https://neteye4master.example.it/neteyeshare
ProxyPassReverse /neteyeshare https://neteye4master.example.it/neteyeshare

To configure NGINX as a reverse proxy, create the file /neteye/local/nginx/conf/conf.d/http/locations/neteyeshare.conf with the following content:

location /neteyeshare/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass https://neteye4master.example.it/neteyeshare;
}

You need to restart NGINX to apply changes.

Setup a Server in NGINX

By default, NGINX on Satellites listens only on port 443. It is possible to start a new server listening on a different port, for example to set it up as a reverse proxy.

In this case you need to create a new file /neteye/local/nginx/conf/conf.d/http/my_custom_server.conf with the following content:

server {
  listen 80;
  server_name my_custom_server;
  location /api/v1/ {
      proxy_set_header X-Forwarded-Host $host:$server_port;
      proxy_set_header X-Forwarded-Server $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_pass http://127.0.0.1:8080/api/v1;
  }
}

You need to restart NGINX to apply changes.

Change SSL settings

Unlike the previous scenarios, these settings must be configured and applied on the Master; then follow the instructions in sections Generate the Satellite Configuration Archive and Configure the Master, Synchronize the Satellite Configuration Archive, and Satellite Setup to deploy the configuration on the Satellite.

To change the NGINX SSL settings, set the optional parameters proxy.ssl_protocol and proxy.ssl_cipher_suite described in NetEye Satellite Configuration.

Suppose the Satellite configuration /etc/neteye-satellite.d/tenant_A/acmesatellite.conf is the following:

{
  "fqdn": "acmesatellite.example.com",
  "name": "acmesatellite",
  "ssh_port": "22",
  "ssh_enabled": true
}

Let’s suppose you want to set up NGINX to support TLSv1.2 only. You just have to set your Satellite configuration file /etc/neteye-satellite.d/tenant_A/acmesatellite.conf as follows:

{
  "fqdn": "acmesatellite.example.com",
  "name": "acmesatellite",
  "ssh_port": "22",
  "ssh_enabled": true,
  "proxy": {
    "ssl_protocol": "TLSv1.2"
  }
}