System Installation¶
In this section you will find guidelines for installing and setting up NetEye in different environments: as a single node or as a cluster, and with satellites if necessary.
NetEye 4 is available as an ISO image for physical installations as part of our continuous release strategy. Please check section Acquiring NetEye ISO Image for download instructions.
Supported Virtualization Environments¶
NetEye ISO installation is supported in the following virtualization environments:
VMware
KVM
Hyper-V
VMware¶
To create a virtual machine and install NetEye on it, start VMware Workstation, click on File > New Virtual Machine, and follow these steps:
Select “Custom (advanced)”, then click “Next”.
Leave the defaults as they are, and click “Next”.
Select “ISO image” and then the NetEye ISO you want to install. You might see the warning “Could not detect which operating system is in this image. You will need to specify which operating system will be installed”. Ignore it and click “Next”.
Select Linux as the Guest OS, and specify “Red Hat Linux” in the dropdown menu. Click “Next”.
Name the VM as you prefer, and select the location to store it.
Specify the number of processors (recommended: 2) and click “Next.”
Specify the amount of memory (recommended: 4GB), click “Next.”
Select the type of connection according to your needs, click “Next.”
Keep the default settings for I/O controllers, click “Next.”
Select “SATA” as virtual disk type, click “Next.”
Select “Create a new virtual disk”, click “Next.”
Specify the disk capacity (minimum: 40GB), click “Next.”
Rename the disk to a name you prefer.
Review the configuration you just created, deselect “Automatically start the VM”, and click on “Finish”.
You should now proceed to section Powering up the VM.
KVM¶
To create a virtual machine and install NetEye on it, start the Virtual Machine Manager, click on File > New Virtual Machine to start the configuration, and follow these steps:
Select “Local install media”, and click on “Forward”.
Choose the NetEye ISO to install, uncheck “Automatically detect from the installation media/source” under “Choose the operating system you are installing”, and then select “CentOS 7.0” for the OS (you can also start typing in the text box to see the available OSs, or run osinfo-query os in your terminal to see all available variants). Click “Forward”.
Specify the amount of memory (recommended: 4GB) and the number of processors (recommended: 2), then click “Forward”.
Specify the disk capacity (minimum: 40GB), click “Forward.”
Give the VM the name you prefer, and review the configuration. Unflag “Customize configuration before install”, click “Forward.”
In the configuration panel that appears, go to “Boot Options” and check that Disk1 and CDRom are both selected.
In the next configuration panel that appears, go to “VirtIO Disk 1”, expand the Advanced options, and change the disk bus to SATA.
Click on “Apply” to propagate your changes.
Click on “Begin installation” to start the NetEye installation.
You should now proceed to section Powering up the VM.
Hyper-V¶
To create a virtual machine and install NetEye on it, start Hyper-V Manager, select Actions > New > Virtual Machine to start the configuration, and follow these steps:
Click “Next”.
Specify the name of your new VM and where to store it, then click “Next”.
Leave the defaults for “Specify Generation”, click “Next”.
Specify the amount of memory (recommended: 4GB), click “Next”.
Select “Default switch” as the connection adapter, click “Next”.
Specify the disk capacity (minimum: 40GB), click “Next”.
Specify the ISO that you want to install, click “Next”.
Review your settings, then click “Finish”.
Before starting your new VM, check the list of startup media in the BIOS settings and make sure the CD/DVD drive is included.
Click on Action > Start to start the virtual machine.
You should now proceed to section Powering up the VM.
Powering up the VM¶
At this point, your VM should be successfully created, and you can power it up. After a few seconds, the NetEye logo will appear, and a countdown to automatically initiate the installation will start.
After ten seconds, if no key is pressed, the installation process starts. The installation process will take several minutes to complete, after which the VM will reboot from the internal hard disk.
At the end of the boot process, you will be prompted to enter your credentials (root/admin). If the login is successful, you can now start to configure your NetEye VM.
Acquiring NetEye ISO Image¶
All NetEye 4 ISO images can be found on the NetEye download site. To be sure you have downloaded a valid image, follow the verification procedure below.
Import the public GPG key¶
First download the GPG public key as a zipped archive from NetEye -> GPG public key -> public-gpg-key. Extract the archive and then import the key with the following command:
$ gpg --import public.gpg
Now verify the imported key:
$ gpg --fingerprint net.support@wuerth-phoenix.com
If the fingerprint matches the one from the NetEye blog, you have the right key installed on your system.
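For reference, the fingerprint printed by this command should match the primary key fingerprint shown in the signature verification output later in this section:
E610 174A 971E 5643 BC89 A4C2 B677 7D15 1A0C 0C60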
Download and verify¶
From the link above download:
The desired ISO file
The sha256sum.txt.asc file
Once you have the sha256sum.txt.asc file, verify it as follows:
$ gpg --verify sha256sum.txt.asc
The output will look something like this:
gpg: Signature made Tue 29 Sep 2020 03:50:01 PM CEST
gpg: using RSA key B6777D151A0C0C60
gpg: Good signature from "Wuerth Phoenix <net.support@wuerth-phoenix.com>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: E610 174A 971E 5643 BC89 A4C2 B677 7D15 1A0C 0C60
Once you have verified the signature of the sha256sum.txt.asc file, please make sure you have the ISO and the sha256sum.txt.asc file in the same directory. You can then verify the ISO file with the following command:
$ sha256sum -c sha256sum.txt.asc 2>&1 | grep OK
The output will look something like this:
neteye4.15-centos7.stable.iso: OK
At this point the ISO file is verified and ready to be used. If the output differs from “OK”, the ISO image may be corrupted and must be downloaded again.
Single Node¶
This section describes how to set up your NetEye virtual machine from scratch, and presents the NetEye 4 monitoring environment.
System Setup¶
NetEye 4 is delivered as a Virtual Machine. Once installed, you will need to access the VM via a terminal or ssh. The first time you log in, you will be required to change your password to a non-trivial one. To maintain a secure system, you should do this as soon as possible. The next steps are to configure your network, update NetEye, and complete the installation.
Step 1: Define the host name for the NetEye server:
# hostnamectl set-hostname {hostname.domain}
# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
{ip} {hostname.domain} {hostname}
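For example, with a hypothetical host neteye.example.com and IP address 192.0.2.10, the resulting /etc/hosts would look like this:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.0.2.10  neteye.example.com neteye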
Step 2: Define the DNS configuration:
# vim /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search {domain}
nameserver {ip1}
nameserver {ip2}
Step 3: Configure the network:
# vim /etc/sysconfig/network-scripts/ifcfg-{interface}
# Generated by parse-kickstart
IPV6INIT="yes"
DHCP_HOSTNAME="{hostname}"
IPV6_AUTOCONF="yes"
BOOTPROTO="static" # To configure according to the client
DEVICE="{interface}"
ONBOOT="yes"
IPADDR={ip} # Configure these three only if static
NETMASK={mask}
GATEWAY={gw}
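For example, a static configuration for a hypothetical interface ens192 with address 192.0.2.10/24 and gateway 192.0.2.1 would look like this:
# Generated by parse-kickstart
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
BOOTPROTO="static"
DEVICE="ens192"
ONBOOT="yes"
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1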
Step 4: Install the latest updates and packages for NetEye:
# yum update
# yum --enablerepo=neteye update
# yum --enablerepo=neteye groupinstall neteye
Step 5: Define an SSH key for secure communications with satellites:
# ssh-keygen -t rsa
Step 6: Set the local time zone.
Find the time zone that best matches your location, and then set it system-wide using the following commands:
# timedatectl list-timezones
# timedatectl set-timezone {Region}/{City}
Then update PHP to use that same location:
Create a file named /neteye/local/php/conf/php.d/30-timezone.ini
Insert the following text in that file:
date.timezone = {Region}/{City}
Restart the php-fpm service:
# systemctl restart php-fpm.service
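The same steps can also be performed directly from the shell; a minimal sketch, assuming Europe/Rome as the time zone:
# timedatectl set-timezone Europe/Rome
# echo "date.timezone = Europe/Rome" > /neteye/local/php/conf/php.d/30-timezone.ini
# systemctl restart php-fpm.service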
Step 7: Make sure required services are running:
# systemctl start influxdb.service grafana-server.service mariadb.service
Step 8: Run the secure install script to complete NetEye setup:
# /usr/sbin/neteye_secure_install
If you would like to verify that NetEye is correctly installed, you can bring up all services and check its current status with the following commands:
# neteye start
# neteye status
Note
If your NetEye setup includes satellites, please make sure to carry out the steps in section Satellite Nodes.
Root User Password¶
When NetEye is first installed, the system generates a unique, random password to use when logging in to the web interface. The password is saved in a hidden file in the root directory of the machine: /root/.pwd_icingaweb2_root.
The first time you log in to the NetEye web interface, you will need to insert the following credentials:
User: root
Password: The password you will find inside the file .pwd_icingaweb2_root.
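For example, you can display the generated password on the console with:
# cat /root/.pwd_icingaweb2_root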
We suggest that you change the root password to a strong one, with at least the following characteristics:
At least six characters long (the more characters, the stronger the password)
A combination of letters, numbers and symbols (@, #, $, %, etc.).
Both uppercase and lowercase letters
To change your password, click on the “gear” icon at the bottom left of NetEye, enter and confirm the new password, then click the “Update Account” button.
Cluster Nodes¶
NetEye 4’s clustering service is based on the Red Hat 7 High Availability Clustering technologies:
Corosync: Provides group communication between a set of nodes, application restart upon failure, and a quorum system.
Pacemaker: Provides cluster management, lock management, and fencing.
DRBD: Provides data redundancy by mirroring devices (hard drives, partitions, logical volumes, etc.) between hosts in real time.
Cluster resources are typically quartets consisting of an internal floating IP, a DRBD device, a filesystem, and a (systemd) service.
Once you have installed clustering services according to the information on this page, please turn to the Cluster Architecture page for more information on configuration and how to update.
Prerequisites¶
A NetEye 4 cluster must consist of between 2 and 16 identical servers running CentOS 7. They must satisfy the following requirements:
Networking:
Bonding across NICs must be configured
A dedicated cluster network interface, named exactly the same on each node
One external static IP address which will serve as the external Cluster IP
One IP Address for each cluster node (i.e., N addresses)
One virtual (internal) subnet for internal floating service IPs (this subnet MUST NOT be reachable from any machine except cluster nodes, as it poses a security risk otherwise)
All nodes must know the internal IPs of all other nodes, and be reachable over the internal network (defined in /etc/hosts; see the example after this list)
Storage:
At least one volume group with enough free storage to host all service DRBD devices defined in Services.conf
General:
All nodes must have root ssh-keys generated, and must trust each other, with the keys stored in /root/.ssh/authorized_keys
Internet connectivity, including the ability to reach repositories at Würth Phoenix
All nodes must have the yum group ‘neteye’ installed
All nodes must have the latest CentOS 7 and NetEye 4 updates installed
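As an illustration of the /etc/hosts and SSH trust requirements above, assume a hypothetical two-node cluster with internal addresses 192.168.47.1 and 192.168.47.2. Each node's /etc/hosts would then contain:
192.168.47.1 neteye01.neteyelocal neteye01
192.168.47.2 neteye02.neteyelocal neteye02
The mutual SSH trust could then be established by generating a key on each node and copying it to every other node, for instance:
# ssh-keygen -t rsa
# ssh-copy-id root@neteye02.neteyelocal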
Installation Procedure¶
Depending on the type of nodes you are installing in your cluster, follow the appropriate procedure below.
If your NetEye setup includes satellites, please make sure to carry out the steps in section Satellite Nodes after each node’s installation.
Basic Cluster Install¶
Define the nodes in ClusterSetup.conf (example configuration templates can be found in /usr/share/neteye/cluster/templates/). This guide assumes that you copy your ClusterSetup.conf to /usr/share/neteye/scripts/cluster and configure it there.
Run the cluster setup script to install a basic Corosync/Pacemaker cluster with a floating clusterIP enabled.
Note
If any issue prevents the correct execution of cluster_base_setup.pl, you can run the same command again adding the option --force to override. This will destroy any existing cluster on the nodes.
Note
The password should be treated as a one-time password, and will not be needed after initial setup.
# cd /usr/share/neteye/scripts/cluster
# ./cluster_base_setup.pl -c ./ClusterSetup.conf -i <cluster-ip> -s <subnet_cidr> -h neteye.example.com -e <internal interface> -p <very_safe_pw>
# ./cluster_base_setup.pl -c ./ClusterSetup.conf -i 192.0.2.47 -s 24 -h neteye.example.com -e ens224 -p Secret
The standard Würth Phoenix fencing is IPMI Fencing for physical clusters, and vSphere fencing for virtual clusters.
Cluster Service Setup¶
Adjust all necessary IPs, ports, DRBD devices, sizes, etc. in all *.tpl.conf files (found in /usr/share/neteye/cluster/templates/). In a typical configuration you only need to update ip_pre, the IP prefix (e.g. 192.168.1 for 192.168.1.0/24) used to generate the virtual IP for the resource, and cidr_netmask, which specifies the CIDR of the internal subnet used by IP resources (e.g. 24 for 192.168.1.0/24).
Run the cluster_service_setup.pl script on each *.tpl.conf file starting from Services-core.tpl.conf:
# cd /usr/share/neteye/scripts/cluster
# ./cluster_service_setup.pl -c Services-core.conf.tpl
# ./cluster_service_setup.pl -c Services-xxx.conf.tpl
# [...]
The cluster_service_setup.pl script is designed to report the last command executed in case there were any errors. If you manually fix an error, you will need to remove the successfully configured resource template from Services.conf and rerun that command. Then run the cluster_service_setup.pl script again as shown above in order to finalize the configuration.
NetEye Service Setup¶
Move all resources to a single node by running pcs node standby on all other nodes. This is only a first-time requirement, as many services require local write access during the initial setup procedure.
Run the neteye_secure_install script on the single active node
Run the neteye_secure_install script on every other node
Take all nodes out of standby by running pcs node unstandby --all
Set up the Director field “API user” on slave nodes.
Elasticsearch Only Nodes¶
This section applies only if you have installed the Log Manager module, which contains Elasticsearch.
An Elasticsearch only node has the same prerequisites and follows the same installation procedure as a standard NetEye cluster node. Please refer to Cluster Configuration Guidelines / Elasticsearch Only Nodes and to Elasticsearch Clusters / Elasticsearch Only Nodes for details.
Voting Only Nodes¶
A Voting only node has the same prerequisites and follows the same installation procedure as a standard NetEye cluster node.
To create a voting only node you have to create an entry of type VotingOnlyNode in the file ClusterSetup.conf, as in the following example. The usage of ClusterSetup.conf is explained in Cluster Installation / Basic Cluster Install. The syntax is similar to the one used for standard Nodes, but note that at most one voting node can be part of the cluster and therefore a single JSON object is specified instead of an array:
"VotingOnlyNode" : {
"addr" : "192.168.47.3",
"hostname" : "neteye03.neteyelocal",
"hostname_ext" : "rdneteye03.si.wp.lan",
"id" : 3
}
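For comparison, a minimal sketch of how standard nodes might be listed as an array in the same file; the key name Nodes and the values below are assumptions, so check the templates in /usr/share/neteye/cluster/templates/ for the exact syntax:
"Nodes" : [
   {
      "addr" : "192.168.47.1",
      "hostname" : "neteye01.neteyelocal",
      "hostname_ext" : "rdneteye01.si.wp.lan",
      "id" : 1
   },
   {
      "addr" : "192.168.47.2",
      "hostname" : "neteye02.neteyelocal",
      "hostname_ext" : "rdneteye02.si.wp.lan",
      "id" : 2
   }
]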
Please refer to Cluster Configuration Guidelines / Voting Only Nodes.
If you have installed the Log Manager module, which contains Elasticsearch, please also refer to Elasticsearch Clusters / Voting Only Nodes for details.
Satellite Nodes¶
Satellite nodes use NATS Server to communicate; the configuration of the NATS server is split into several files. The file authorization.conf is regenerated at each restart of nats-server and contains both the users and permissions found in users.d and permissions.d in the /neteye/shared/nats-server/conf directory. To add users or permissions it suffices to create a new file in one of the respective directories. For the correct syntax and possibilities, please refer to the official NATS server authorization documentation.
An example permission configuration file permissions.d/my_permission.conf could look like this:
PERM_EXAMPLE = {
publish = "example.>"
subscribe = "example.>"
}
An example user configuration file users.d/my_user.conf could look like this:
{user: exampleuser, permissions: $PERM_EXAMPLE}
This user would now be able to communicate only on subjects in the example namespace. For more information about subject hierarchies and wildcard notation, please refer to the official nats-server wildcard documentation.
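As a brief illustration of the wildcard syntax used above: the subject filter example.> matches example.alpha as well as example.alpha.beta, while example.* would match example.alpha only, since * stands for exactly one token.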
Configuring NATS server with Tornado¶
This functionality can be used in a Master-Satellites installation of NetEye that involves Tornado. While each Tornado instance on a satellite can work independently on that satellite, using NATS Server allows Tornado to send its data to the Master through independent and secure channels for processing. To achieve this, an account must be defined on each satellite and registered on the Master node. Multiple accounts can nonetheless coexist on a satellite, each sending a different data flow to the Master, but this scenario is not presented here.
Before proceeding with this configuration, please check that the following requirements are satisfied on each satellite:
The satellite’s root-ca certificate is present and trusted by the satellite itself under /root/security/ca/root-ca.crt
Server certificates needed by the nats-server are present under /neteye/shared/nats-server/conf/certs/
A DNS resolution entry for nats-server.neteyelocal is present in /etc/hosts, i.e., the file contains the line: 127.0.0.1 nats-server.neteyelocal
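A quick way to check these prerequisites on a satellite (paths taken from the list above):
# ls /root/security/ca/root-ca.crt
# ls /neteye/shared/nats-server/conf/certs/
# grep nats-server.neteyelocal /etc/hosts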
The three configuration files for this scenario are already shipped and installed under /neteye/shared/nats-server/conf/. However, they are the default configuration for the local NATS Server and need to be edited to allow communication with the Master. The starting point is to generate the certificates to encrypt the communication:
From the master node, generate a client certificate for each satellite <satellite1> with the following command. Replace the values <satellite1> and /C=IT/ST=Bolzano/L=Bolzano/O=Global Security/OU=Neteye/CN=<satellite1> according to the satellite and your organisation. Please choose a meaningful name for the satellite <satellite1> and stick with it:
/bin/bash /usr/share/neteye/scripts/security/generate_client_certs.sh "<satellite1>" "/C=IT/ST=Bolzano/L=Bolzano/O=Global Security/OU=Neteye/CN=<satellite1>" "./"
Copy the certs folder just generated into the satellite, to the folder /neteye/shared/nats-server/conf/certs/
Copy the /root/security/ca/root-ca.crt file of the master node into the satellite, in the path /neteye/shared/nats-server/conf/certs/root-ca.crt
Set the owner of the certs folder and of all its files to the system user nats, so that the NATS server can read the certificates:
chown -R nats:nats /neteye/shared/nats-server/conf/certs/
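A sketch of these copy steps, run from the master node and assuming the satellite is reachable via SSH as <satellite1> and that the script above created a local certs directory (adjust host and paths to your environment):
# scp -r certs/. root@<satellite1>:/neteye/shared/nats-server/conf/certs/
# scp /root/security/ca/root-ca.crt root@<satellite1>:/neteye/shared/nats-server/conf/certs/root-ca.crt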
These certificates are picked up by the NATS leaf node configuration of the satellite contained in nats-leaf.conf, but the following changes need to be carried out manually:

On the satellite, in the file nats-leaf.conf, the correct URL of the Master must be provided; it must include the satellite’s name and the IP address (or hostname) of the Master. The port must also be changed in case it was modified on the Master from the standard 7422. The path to the certificates used by the leaf node must be adjusted by substituting <satellite1> with the name chosen for the satellite:
url: "nats-leaf://<satellite1>@<MASTER_IP>:7422"
tls: {
cert_file: "/neteye/shared/nats-server/conf/certs/<satellite1>.crt.pem"
key_file: "/neteye/shared/nats-server/conf/certs/private/<satellite1>.key.pem"
# This is the root-ca.crt of the master node
ca_file: "/neteye/shared/nats-server/conf/certs/root-ca.crt"
verify: true
}
On the master, the main configuration file is multi-tenancy.conf, which contains accounts, authorisation and collectors for the local NATS Server. This file should be edited to include the configuration of the satellites, as shown in these excerpts:
authorization: {
users = [
{user: <satellite1>, account: SATELLITE1}
{user: <satellite2>, account: SATELLITE2}
]
}
accounts: {
SATELLITE1: {
users: [
{user: <satellite1>}
]
exports: [
{stream: tornado.events}
{stream: telegraf}
]
},
Finally, the file nats-server.conf also needs a slight change: comment or uncomment the correct include, depending on whether it is on the Master or the Satellite:
Master:
#include ./authorization.conf

## By including this file the nats-server will run as a leaf node (the file needs to be adjusted with your configuration)
# include ./nats-leaf.conf

## Including this file permits to configure connections from leaf nodes, which allows for multi-tenancy
include ./multi-tenancy.conf

Warning

By commenting out #include ./authorization.conf, the permissions defined for users inside users.d and permissions.d are no longer used. This is necessary to avoid unwanted behaviour for NATS users that belong to accounts but still have some permissions defined in authorization. So, if you configured custom permissions for NATS users not belonging to any account, you need to redefine them directly inside the nats-server.conf file (or in a separate file, which is then included in nats-server.conf).
Satellite:
include ./authorization.conf

## By including this file the nats-server will run as a leaf node (the file needs to be adjusted with your configuration)
include ./nats-leaf.conf

## Including this file permits to configure connections from leaf nodes, which allows for multi-tenancy
# include ./multi-tenancy.conf
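After adjusting the includes on both Master and Satellite, restart the NATS service on each machine so that the new configuration takes effect (assuming the systemd unit is named nats-server, as referenced earlier in this section):
# systemctl restart nats-server.service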