Requirements¶
This section lists all the requirements that must be satisfied to install NetEye and is organized into the following parts:
Requirements for a Node presents the requirements for the installation of a NetEye Single Node and Satellite Node, and for each Cluster Node; the supported hypervisors are also listed, together with their configuration requirements
Cluster Requirements and Best Practices is a conversational section that introduces and describes general guidelines and best practices that should be taken into account when designing a new cluster infrastructure
NetEye Satellite Requirements contains a list of requirements that need to be satisfied on the Satellite Nodes in addition to the ones described in Requirements for a Node
TCP and UDP Ports Requirements lists all the TCP and UDP ports that should be opened to allow flawless functioning of a NetEye installation, separated into system ports and module-specific ports
Additional Software Installation explains the best practices on how the NetEye Administrator should manage additional software on the NetEye Nodes
Requirements for a Node¶
This section lists the hardware and hypervisor requirements to install NetEye. The system requirements apply to a Single Node, a Satellite Node, and each Cluster Node, for both physical and virtual installations.
System Requirements¶
Table 3 gives an overview of the basic system requirements for a Node in both a testing and a production environment. Depending on the services activated and the load on the system, the requirements might need to be raised: running resource-intensive services like SIEM or ITOA requires increasing all of them. Disk space may also become an issue when the amount of logs produced by a NetEye installation, by its monitored objects, or by both is large.
You can always contact the official channels (sales, consultants, or the support portal) for advice on how to tailor the system to your needs.
Requirement | Demo or testing environment | Production environment
---|---|---
# of CPUs | 2 cores | 4 cores
RAM | 8 GB | 16 GB
Hard disk | 60 GB | 120 GB
Starting from version 4.23, NetEye is based on RHEL8, which requires a license that is provided by NetEye sales or consultants. Since this license is necessary to launch neteye_secure_install during the installation procedure, make sure you have it before starting the installation.
If your NetEye Node does not have direct access to the Internet and must instead go through a proxy, you need to configure the software running on NetEye to use that proxy, as explained in Section Nodes behind a Proxy.
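As a generic RHEL 8 sketch (the NetEye-specific procedure is described in Nodes behind a Proxy), the proxy host and port below are hypothetical placeholders:

```shell
# Hypothetical proxy endpoint; replace with your own values.
PROXY_HOST="proxy.example.com"
PROXY_PORT="3128"

# DNF reads its proxy setting from /etc/dnf/dnf.conf:
echo "proxy=http://${PROXY_HOST}:${PROXY_PORT}" | sudo tee -a /etc/dnf/dnf.conf

# subscription-manager (used for license verification) has its own proxy settings:
sudo subscription-manager config \
    --server.proxy_hostname="${PROXY_HOST}" \
    --server.proxy_port="${PROXY_PORT}"
```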
The following domains must also be reachable from each Node, to allow for updates and license verification:
Domain | Port
---|---
cdn.redhat.com | 443 TCP
cdn-ubi.redhat.com | 443 TCP
cert-api.access.redhat.com | 443 TCP
cert.cloud.redhat.com | 443 TCP
subscription.rhsm.redhat.com | 443 TCP
Supported Virtualization Environments¶
NetEye installation is supported in the following virtualization environments. For each one, the options that need to be configured during installation are listed.
VMware. Select ESXi 6.7 and Later as Compatibility, then VMware Paravirtual as the SCSI controller, and finally either SATA or SCSI.
KVM. In Boot Options check that Disk1 and CDRom are both selected, then change the disk bus to SATA (VirtIO Disk 1 under Advanced options in the next configuration step).
Hyper-V. No particular option is required.
Cluster Requirements and Best Practices¶
This section focuses mostly on best practices for a NetEye deployment in a cluster environment, since system requirements for each Cluster Node correspond to those for a Single Node.
These guidelines are subject to change and should not be considered as hard requirements, because they may vary significantly depending on the running services and logging level.
A network infrastructure involving NetEye should be carefully designed in order to take advantage of all of its functionalities, especially in the case of a particularly complex setup, in which the experience of a NetEye specialist can prove useful. To get in touch with one of them, please contact our team.
Cluster Networking Requirements¶
This section illustrates in detail the requirements, and their rationale, for all networking involving a NetEye Cluster: inbound, outbound (“Corporate Network”), and among the nodes composing the cluster (called “intra-cluster communication” or “Private (Heartbeat) Network” in the remainder).
The remainder of this section is therefore rather conversational; to summarize its content, we point out a few good practices:
setting up a (NetEye) Cluster requires a dedicated network for intra-cluster communication, separated from the Corporate Network
intra-cluster communication should be allowed freely, without limitations
each NetEye Cluster node should have its own IP address in the Private Network
Corporate Network¶
Configuring the NetEye Cluster and allowing communication between the Cluster and the Corporate Network impacts several parts of networking and requires opening a number of ports. Key concepts and points to focus on include:
- Network Layer: Monitoring and Management Network
This network will be used by NetEye to collect monitoring and performance data, system logs and allow access to:
NetEye Web interface
Each node SSH interface
Any other running services
The bottom line for this network is that it must be able to access–and must be reached by–every system that needs to be monitored by NetEye.
- Network Link
Although a single NIC will suffice, to allow service continuity in case of hardware malfunction we suggest that you plan for bonding of two network adapters in an active/standby (failover) configuration.
- IP Addresses: Physical node
A dedicated IP address for each node. Each IP should be in the same network segment. This IP is used both for management tasks and active (from NetEye to devices) monitoring.
- IP Addresses: Management (iDRAC)
A dedicated IP address for the management interface of each node.
- Cluster Virtual IP Address
One IP address used by the clustered system to allow monitoring and management from the public network
Depending on the services enabled on the NetEye Cluster, a number of ports must be used for the communication flow with the Corporate Network.
In general, Satellite Nodes, although they are NetEye instances, do not need to satisfy all these requirements, because they already communicate securely with the NetEye Master node using NATS/Tornado. Moreover, since the purpose of Satellite Nodes is to monitor the infrastructure and collect data, they only need to allow traffic for NATS (Master/Satellite communication), Icinga (monitoring), and Elastic (EBP and related services).
Private (Heartbeat) Network¶
Intra-cluster communication should usually be allowed freely. Key concepts and points to focus on include:
- Network Layer: Internal Communication Network
This network will be used for internal communication between each NetEye service. NetEye cluster nodes should be able to talk to each other without restriction. For security reasons, you should not share this network with other systems.
- Network Link
Although a single NIC will suffice, to allow service continuity in case of hardware malfunction we suggest that you plan for bonding of two network adapters in an active/standby (failover) configuration. Ensure that the inter-node round-trip latency is less than 300 ms, with a target of 2 ms as optimal, as stated in the RHEL Corosync documentation.
- IP Addresses
Internal services running on a NetEye Cluster with all modules installed require at least 30 IP addresses. It is therefore strongly recommended to always configure a dedicated /24 network (e.g., 172.20.12.0/24) to avoid running out of available IPs and being forced to reconfigure the whole network if the cluster is expanded.
Note
None of these IPs should be publicly exposed, because they are used only by services running on the NetEye cluster.
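The active/standby bonding recommended above can be sketched with NetworkManager's nmcli. The interface names (eno1, eno2) are hypothetical, and the address is taken from the example /24 network mentioned earlier:

```shell
# Create an active-backup (failover) bond; miimon=100 enables link monitoring
# every 100 ms so a dead NIC is detected quickly.
nmcli con add type bond ifname bond0 con-name bond0 \
    bond.options "mode=active-backup,miimon=100"

# Enslave the two physical adapters (hypothetical interface names).
nmcli con add type ethernet ifname eno1 master bond0
nmcli con add type ethernet ifname eno2 master bond0

# Assign the node its Private Network address (example from the /24 above).
nmcli con mod bond0 ipv4.addresses 172.20.12.11/24 ipv4.method manual
nmcli con up bond0
```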
NetEye Satellite Requirements¶
A Satellite is a NetEye instance which depends on a main NetEye installation (either Single Node or Cluster), called Master, and carries out tasks such as:
execute Icinga 2 checks and forward results to the Master
collect logs and forward them to the Master
forward data through NATS
collect data through Tornado Collectors and forward them to the Master to be processed by Tornado
Besides those mentioned in Requirements for a Node, there are a few other requirements that a satellite must satisfy:
It is required that both the Master and the Satellite be equipped with the same NetEye version
The NATS connection between Master and Satellite is always initiated by the Satellite, so please ensure that the Networking Requirements for NATS Leaf Nodes are satisfied
If you are in a NetEye Cluster environment, check that all resources are in Started status before proceeding with the Satellite configuration procedure
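A quick way to verify the NATS connectivity requirement from the Satellite side is to probe the default NATS leaf-node port (7422, listed in the System Ports table). The Master hostname below is hypothetical:

```shell
master="neteye-master.example.com"   # hypothetical Master hostname
port=7422                            # default NATS leaf-node port

# bash's /dev/tcp pseudo-device attempts a plain TCP connection; timeout
# keeps the probe short if packets are silently dropped by a firewall.
if timeout 5 bash -c "exec 3<>/dev/tcp/${master}/${port}" 2>/dev/null; then
    echo "NATS leaf port reachable"
else
    echo "NATS leaf port NOT reachable"
fi
```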
TCP and UDP Ports Requirements¶
This section contains a list of TCP and UDP ports that should be opened on the Corporate Network to allow NetEye to operate correctly. These requirements apply to both NetEye Single Node and Cluster installations, except for cluster-specific ports.
For security reasons, we suggest opening only the ports used by the running services and closing everything else.
Note
All ports are listed with their default values as assigned by IANA or by the respective software producers.
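As a sketch of the "open only what you use" practice with firewalld (the default firewall on RHEL 8), assuming a hypothetical minimal port set for a Single Node:

```shell
# Hypothetical minimal set: SSH, HTTP/S, and outbound NTP replies.
# Extend it with the module-specific ports your services actually use.
ports=(22/tcp 80/tcp 443/tcp 123/udp)

if command -v firewall-cmd >/dev/null; then
    for p in "${ports[@]}"; do
        # --permanent persists the rule across reboots.
        sudo firewall-cmd --permanent --add-port="${p}"
    done
    sudo firewall-cmd --reload
else
    echo "firewalld not available on this system"
fi
```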
System Ports¶
These ports should always be open, because they refer to basic functionalities of a cluster.
Protocol/Port | Service | Description
---|---|---
RMCP TCP 5900 | iDRAC Access | Systems that need to manage a node via iDRAC should reach each Management IP Address on the iDRAC dedicated ports. Please refer to Dell’s Support Documentation for the required ports.
TCP 80, 443 | NetEye Management Interface and System Updates | Systems used to manage NetEye should reach the Cluster Virtual IP via HTTP/S.
TCP 22 | Node SSH Console | Systems used to manage deep NetEye configuration and node configuration should reach every Physical Node IP via SSH.
TCP 25, 465 | SMTP Outbound | To allow sending of notifications, the required ports for SMTP outbound should be allowed from each Physical Node IP to the selected SMTP Relay Server.
UDP 123 | NTP | Each node should be able to reach the official internal time source server via the NTP protocol.
TCP 389, 3268 | LDAP Authentication and Authorization | To allow Active Directory user accounts to access NetEye, each node must be able to contact at least one DC on both ports 389 (LDAP) and 3268 (Global Catalog). To allow LDAP user accounts to access NetEye, each node must be able to contact your LDAP Source on port 389 (or the port of your choice).
TCP 7422 | NATS Leaf Nodes | The NATS Leaf Nodes are configured to talk to the NATS Server of the NetEye Master.
The ports in Table 5 include the cluster requirements specified by Red Hat.
Protocol/Port | Required for | Description
---|---|---
UDP 623 | iDRAC fencing |
TCP 2224 | Node-to-node communication | Port 2224 must be open on each node so that pcs can talk from any node to all nodes in the cluster, including itself.
TCP 2347 | neteye-agent service |
TCP 3000 | Grafana |
TCP 3121 | Pacemaker Remote nodes | Required on all nodes if the cluster has any Pacemaker Remote nodes.
TCP 3306 | MariaDB |
TCP 4748 | Tornado API | Communication with the Tornado API from the GUI and for testing.
TCP 5403 | Quorum device host | Required on the quorum device host when using a quorum device with corosync-qnetd.
TCP 5404 | Corosync multicast UDP | Required on corosync nodes if corosync is configured for multicast UDP.
TCP 5405, 5406 | Corosync | Required on all corosync nodes.
TCP 5664 | Icinga 2 | Required by Icinga 2 for intra-cluster communication.
TCP 7788-7799 | DRBD | Port range may be extended as new resources or services are added.
TCP 8086 | InfluxDB |
TCP 8000 | Lampo |
Monitoring Requirements¶
Monitoring should never be carried out on the private (heartbeat) cluster network.
At present, the NetEye Cluster’s Virtual IP is used for passive monitoring (i.e., by devices autonomously sending information to NetEye) and agent deployment, while the Physical Node’s IP is used for active monitoring (i.e., requests from NetEye to devices).
We distinguish the following types of monitoring:
Active monitoring through ICMP, consisting of direct ICMP requests from NetEye to monitored devices
Active monitoring through SNMP, similar to the previous one, but using the SNMP protocol instead of ICMP
Passive monitoring through SNMP, which uses SNMP trap events sent from monitored devices to NetEye
Mail-based monitoring, based on emails sent by devices or users to NetEye that trigger specific events
Active and passive monitoring have different requirements in terms of ports. The operating system installed on the devices to be monitored also influences which ports must be opened; all are reported in Table 6. Depending on the monitoring tasks activated, additional considerations are described in section Additional Remarks for Monitoring.
Protocol/Port | Description | Monitoring
---|---|---
ICMP | Test via ping to check if a host is alive | Active/Passive
TCP 4222, 4244 | APM | Active/Passive
TCP 5001 | plugin check_iperf | Passive
TCP 5665 | Server monitoring (Icinga 2 protocol) | Active
UDP 161 | Device/server monitoring (SNMP protocol) | Active
UDP 162 | SNMP traps | Passive
TCP 135 | Windows server monitoring (WMI protocol) and Windows admin user (more ports are required) | Active, Windows devices only
TCP 22 | Linux server monitoring (SSH protocol with check_by_ssh) | Active, Linux devices only
Additional Remarks for Monitoring¶
Depending on the services enabled on the cluster, take into account the following:
For Sahi and/or check_webpage, create a dedicated user account if required.
Enable the SNMP v2c protocol and community on all servers and devices.
Enable all TCP and UDP ports needed for specific monitoring requirements, such as check_tcp and/or check_udp for network service ports like 53 (DNS), 123 (NTP), 3306 (MySQL), etc. For a full list of reserved ports, you can consult the IANA Service Name and Transport Protocol Port Number Registry.
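Checks of this kind can also be run by hand from a NetEye node. The plugin path and the target hosts below are assumptions (on RHEL-based systems the monitoring plugins usually live under /usr/lib64/nagios/plugins):

```shell
# Hypothetical plugin directory; adjust if your distribution differs.
PLUGINS=/usr/lib64/nagios/plugins

if [ -x "${PLUGINS}/check_tcp" ]; then
    # Probe two of the network service ports mentioned above (hypothetical hosts).
    "${PLUGINS}/check_tcp" -H dns1.example.com -p 53   # DNS
    "${PLUGINS}/check_udp" -H ntp1.example.com -p 123  # NTP
else
    echo "monitoring plugins not installed at ${PLUGINS}"
fi
```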
You may need to contact your NetEye 4 consultant for the following requirements:
Create a database monitoring user, where the rights granted will depend on the database’s vendor
Create a user on HyperV systems
Allow connections between NetEye 4 and all VLANs/Subnets involved in monitoring
LDAP Access Requirements¶
In order to log in to NetEye 4 with a centralized account, create an LDAP/AD user with read permissions on the following tree objects:
Account name
Password
Email address
You will also need to open the following TCP ports from NetEye 4 to the LDAP system directory:
Port | Description
---|---
TCP 389 | LDAP/AD domain-specific information
TCP 636 | LDAP/AD domain-specific information, encrypted over SSL
TCP 3268 | LDAP queries via Global Catalog
TCP 3269 | LDAP queries via Global Catalog, encrypted over SSL
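A bind of this kind can be tested from a node with ldapsearch. The domain controller, bind account, and base DN below are all hypothetical; port 3268 is the Global Catalog port from the table above:

```shell
DC="dc1.example.com"   # hypothetical domain controller

if command -v ldapsearch >/dev/null; then
    # -x requests a simple bind; the query reads only the attributes NetEye
    # needs (account name and mail address).
    ldapsearch -x -H "ldap://${DC}:3268" \
        -D 'svc_neteye@example.com' -w 'secret' \
        -b 'dc=example,dc=com' '(sAMAccountName=jdoe)' \
        sAMAccountName mail \
        || echo "LDAP query failed (check port 3268 and credentials)"
else
    echo "ldapsearch not available"
fi
```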
Notification Requirements¶
Notifications are sent via SMTP or SMS; the requirements therefore relate to the corresponding NetEye modules.
Relay all email sent to eventgw@domain on your SMTP server to the NetEye 4 Event Handler
In order to send SMS messages, unset the PIN on your SIM card
We provide two types of modem:
SMS Gateway connected over Ethernet
SMS Gateway connected via serial bus (contact your NetEye 4 consultant for further information)
Individual Module Requirements¶
Individual NetEye modules may have their own specific requirements that will need to be taken into consideration if they are enabled. In particular, when configuring cluster nodes, you should also make sure that the following requirements are included for each node.
Log management
The following ports need to be opened to receive log data.
Port | Service
---|---
TCP/UDP 514 | syslog/rsyslog
TCP 6161 | syslog/splunk
ntopng
The following ports must be opened to allow communication between ntopng, nProbe, and Redis.
Port | Service/Description
---|---
TCP 5556 | ZMQ
TCP 6363 | nProbe (NetFlow collector)
TCP 6379 | Redis
SIEM
The SIEM module requires the Log management module to work; therefore, besides the ports listed in Table 8, the following additional ports are needed.
Port | Description
---|---
UDP 2055 | NetFlow listening port (NetFlow protocol)
TCP 4950 | El Proxy
TCP 5044 | Logstash input for Beats
TCP 5601 | Kibana
TCP 9200 | Elasticsearch
Note
Port 9200 should be opened if there are Satellite Nodes that send data to the Elasticsearch service
SLM
The SLM Daemon needs a dedicated port to operate correctly.
Port | Description
---|---
TCP 4949 | SLM daemon
Single Purpose Nodes¶
Elastic-only nodes work only as part of the Elasticsearch cluster and communicate on the private (heartbeat) network; therefore they do not expose any ports required by other services.
Voting-only nodes only provide quorum to several components of the NetEye cluster: DRBD, PCS, and Elasticsearch. Like Elastic-only nodes, they do not expose any service and communicate with the other cluster nodes on the private (heartbeat) cluster network; therefore no port needs to be explicitly opened.
Additional Software Installation¶
To satisfy particular use cases you may need software that is not pre-installed on the NetEye Nodes. To ensure that this software and all its dependencies are automatically managed by the system, please install and manage it only via the DNF RPM package manager.
Warning
It is strongly recommended to avoid installing software in any other way.
For example, installing a Python module via pip will not let the system manage the module and keep track of its dependencies during updates, which may lead to the module being outdated and its dependencies being broken. In this case, instead of installing the Python module via pip, please find an RPM package that provides the module and install it via DNF.
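As a concrete sketch, take the hypothetical case of the 'requests' Python module:

```shell
PKG="python3-requests"   # RPM packaging of the Python 'requests' module

# Wrong: 'pip install requests' bypasses the package manager entirely.
# Right: install the RPM so DNF tracks the module and its dependencies.
if command -v dnf >/dev/null; then
    sudo dnf install -y "${PKG}"
    # If you only know the Python module name, locate the providing RPM:
    dnf provides "python3dist(requests)"
else
    echo "dnf not available on this system"
fi
```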