Requirements¶
This section lists all the requirements that must be satisfied to install NetEye and is organized in these parts:
Requirements for a Node presents the requirements for the installation of a NetEye Single Node, a Satellite Node, and each Cluster Node. It also lists the supported hypervisors together with their configuration requirements
Cluster Requirements and Best Practices is a conversational section that introduces and describes general guidelines and best practices that should be taken into account when designing a new cluster infrastructure
NetEye Satellite Requirements contains a list of requirements that need to be satisfied on the Satellite Nodes in addition to the ones described in Requirements for a Node
TCP and UDP Ports Requirements lists all the TCP and UDP ports that should be opened to allow flawless functioning of a NetEye installation, separated into system ports and module-specific ports
Additional Software Installation explains the best practices on how the NetEye Administrator should manage additional software on the NetEye Nodes
Requirements for a Node¶
This section lists hardware and hypervisor requirements to install NetEye. The system requirements apply to a Single Node, a Satellite Node, and each Cluster Node, for both physical and virtual installations.
System Requirements¶
Table 3 gives an overview of the basic system requirements for a Node in both a testing and a production environment. Depending on the services activated and the load on the system, these requirements might need to be raised: running resource-intensive services like SIEM or ITOA, for example, requires increasing all of them. Disk space may also become an issue when the amount of logs produced by a NetEye installation, by its monitored objects, or by both is large.
You can always contact the official channels (sales, consultants, or the support portal) for advice on how to tailor the system to your needs.
| Requirement | Demo or testing environment | Production environment |
|---|---|---|
| # of CPUs | 2 cores | 4 cores |
| RAM | 8 GB | 16 GB |
| Hard disk | 60 GB | 120 GB |
Starting from version 4.23, NetEye is based on RHEL8, which requires a license that is provided by NetEye sales or consultants. Since this license is necessary to launch neteye install during the installation procedure, make sure you have it before starting the installation.
If your NetEye Node does not have direct access to Internet and needs instead to pass through a proxy to reach the Internet, then you need to configure the software running on NetEye to pass through this proxy, as explained in Section Nodes behind a Proxy.
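The complete NetEye-specific procedure is described in Nodes behind a Proxy; as a generic, hedged illustration only, a proxy for DNF package operations could be configured like this (proxy.example.com:3128 is a placeholder):

```
# Hypothetical proxy host/port: adjust to your environment.
# The line must end up in the [main] section of /etc/dnf/dnf.conf.
echo "proxy=http://proxy.example.com:3128" >> /etc/dnf/dnf.conf
```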
Moreover, the following domains must be reachable from each Node, to allow for updates and license verification:
| Domain | Port | Intended Use |
|---|---|---|
| repo.wuerth-phoenix.com | 443 TCP | Würth Phoenix repository for NetEye update/upgrade |
| api.neteye.cloud | 443 TCP | Würth Phoenix API used during NetEye update/upgrade |
| cdn.redhat.com | 443 TCP | RedHat subscription/packages |
| cdn-ubi.redhat.com | 443 TCP | RedHat subscription/packages |
| cert-api.access.redhat.com | 443 TCP | RedHat subscription/packages |
| cert.cloud.redhat.com | 443 TCP | RedHat subscription/packages |
| subscription.rhsm.redhat.com | 443 TCP | RedHat subscription/packages |
| mirrors.fedoraproject.org | 443 TCP | Provides a set of additional packages for RHEL |
| linux.dell.com | 443 TCP | DELL packages (only for physical DELL machines) |
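To quickly verify that the required domains above are reachable from a Node (directly or through the configured proxy), a simple connectivity check such as the following sketch can be used:

```
# Report HTTPS reachability for each required domain (connectivity only,
# the HTTP status code itself is not relevant for this test).
for d in repo.wuerth-phoenix.com api.neteye.cloud cdn.redhat.com \
         cdn-ubi.redhat.com subscription.rhsm.redhat.com mirrors.fedoraproject.org; do
    curl -sS --connect-timeout 5 -o /dev/null "https://${d}" \
        && echo "OK      ${d}:443" \
        || echo "FAILED  ${d}:443"
done
```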
The following domains may prove to be useful and simplify working with NetEye:
| Domain | Port | Intended Use |
|---|---|---|
| bitbucket.org | 443 TCP | Download customized scripts and plugins (often used by NetEye consultants) |
| grafana.com | 443 TCP | Download Grafana plugins, panels and datasources |
| yum.centreon.com | 443 TCP | Download Centreon plugins for monitoring |
Supported Virtualization Environments¶
NetEye installation is supported in the following virtualization environments. For each one, the options that need to be configured during installation are listed below.
VMware. Select ESXi 6.7 and Later as Compatibility, then VMware Paravirtual as SCSI controller, and finally either SATA or SCSI.
KVM. In Boot Options check that Disk1 and CDRom are both selected, then change the disk bus to SATA (VirtIO Disk 1 under Advanced options in the next configuration step).
HyperV. No particular option is required.
LDAP Access Requirements¶
NetEye 4 uses the LDAP protocol to bind users defined in Active Directory or OpenLDAP to a centralized account. Hence, NetEye adheres to the LDAP Standards Track.
In order to log in to NetEye 4 with a centralized account, create an LDAP/AD user with read permissions on the following objects:
Account name
Password
Email address
You will also need to open several TCP ports from NetEye 4 to the LDAP system directory.
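As a hedged example, the bind user and its read permissions can be verified with the standard ldapsearch client from the openldap-clients package; the bind DN, search base, and attribute names below are Active Directory placeholders and must be adapted to your directory:

```
# Verify that the bind user can read the account name and mail attributes
# of a test user (all names below are hypothetical).
ldapsearch -H ldaps://dc01.example.com:636 \
    -D "CN=neteye-bind,OU=Service Accounts,DC=example,DC=com" -W \
    -b "DC=example,DC=com" "(sAMAccountName=jdoe)" sAMAccountName mail
```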
Notification Requirements¶
Notifications can be sent via SMTP or SMS; therefore, the following requirements should be satisfied.
To send notifications via SMTP you need an SMTP Relay Server, which should be reachable by NetEye Nodes as described here
In order to send SMS messages, unset the PIN on your SIM card
To handle SMS we provide two types of modem:
SMS Gateway connected over Ethernet
SMS Gateway connected via serial bus (contact your NetEye 4’s consultant for further information)
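The reachability of the SMTP Relay Server from a NetEye Node can be checked with a minimal connectivity test like the following (smtp-relay.example.com is a placeholder):

```
# Connectivity test only: curl opens the SMTP session and prints the server banner.
curl -v --connect-timeout 5 smtp://smtp-relay.example.com:25
```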
Cluster Requirements and Best Practices¶
This section focuses mostly on best practices for a NetEye deployment in a cluster environment, since system requirements for each Cluster Node correspond to those for a Single Node.
These guidelines are subject to change and should not be considered as hard requirements, because they may vary significantly depending on the running services and logging level.
A network infrastructure in which NetEye is involved should be carefully designed in order to take advantage of all of its functionalities, especially in the case of a particularly complex setup, in which the experience of a NetEye specialist can prove useful. To get in touch with one of them, please contact our team.
Cluster Networking Requirements¶
This section illustrates in detail the requirements and their rationale for all networking involving a NetEye Cluster: inbound, outbound (“Corporate Network”), and among the nodes composing the cluster (called “intra-cluster communication” or “Private (Heartbeat) Network” in the remainder).
The remainder of this section is therefore rather conversational; to summarize its content, we point out a few good practices:
setting up a (NetEye) Cluster requires a dedicated network for intra-cluster communication, separated from the Corporate Network
intra-cluster communication should be allowed freely without limitations
each NetEye Cluster Node should have its own IP Address in the Private Network
Corporate Network¶
Configuring the NetEye Cluster and allowing communication between Cluster and Corporate Network impacts several parts of networking and requires opening a number of ports. Key concepts and points to focus on include:
- Network Layer: Monitoring and Management Network
This network will be used by NetEye to collect monitoring and performance data, system logs and allow access to:
NetEye Web interface
Each node SSH interface
Any other running services
The bottom line for this network is that it must be able to access, and must be reachable from, every system that needs to be monitored by NetEye.
- Network Link
Although a single NIC will suffice, to allow service continuity in case of hardware malfunction we suggest that you plan for bonding of two network adapters in an active/standby (failover) configuration (see the sketch after this list).
- IP Addresses: Physical node
A dedicated IP address for each node. Each IP should be in the same network segment. This IP is used both for management tasks and active (from NetEye to devices) monitoring.
- IP Addresses: Management (iDRAC)
A dedicated IP address for the management interface of each node.
- Cluster Virtual IP Address
One IP address used by the clustered system to allow monitoring and management from the public network
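A minimal sketch of the active/standby bond suggested under Network Link, using NetworkManager on RHEL 8; the interface names and the IP address below are hypothetical and must match your hardware and addressing plan:

```
# Create an active-backup bond and enslave two physical NICs (names are examples).
nmcli connection add type bond con-name bond0 ifname bond0 \
      bond.options "mode=active-backup,miimon=100"
nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0
# Assign the node's dedicated IP on the Monitoring and Management Network.
nmcli connection modify bond0 ipv4.method manual \
      ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1
nmcli connection up bond0
```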
Depending on the services enabled on the NetEye Cluster, a number of ports must be used for the communication flow with the Corporate Network.
In general, Satellite Nodes, while they are NetEye instances, do not need to respect all these requirements. Indeed, Satellite Nodes already communicate securely with the NetEye Master node using NATS/Tornado. Moreover, the purpose of Satellite Nodes is to monitor the infrastructure and collect data, therefore they only need to allow traffic for NATS (Master/Satellite communication), Icinga (monitoring), and Elastic (EBP and related services).
Private (Heartbeat) Network¶
Intra-cluster communication should usually be freely allowed. Key concepts and points to focus on include:
- Network Layer: Internal Communication Network
This network will be used for internal communication between each NetEye service. NetEye cluster nodes should be able to talk to each other without restriction. For security reasons, you should not share this network with other systems.
- Network Link
Although a single NIC will suffice, to allow service continuity in case of hardware malfunction we suggest that you plan for bonding of two network adapters in an active/standby (failover) configuration. Ensure that the round-trip latency between nodes is less than 300ms, with a target of 2ms as optimal, as stated in the RHEL Corosync documentation.
- IP Addresses
Internal services running on a NetEye Cluster with all modules installed require at least 30 IP Addresses. It is therefore strongly recommended to always configure a dedicated /24 network (e.g., 172.20.12.0/24) to avoid running out of available IPs and being forced to reconfigure the whole network if the cluster is expanded.
Note
None of these IPs should be publicly exposed, because they are used only by services running on the NetEye cluster.
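Since intra-cluster traffic should flow without restrictions and the Private (Heartbeat) Network is not shared with other systems, one possible approach on RHEL 8 with firewalld is sketched below; bond1 is a hypothetical name for the private bonded interface:

```
# Allow unrestricted intra-cluster traffic on the private interface only.
firewall-cmd --permanent --zone=trusted --change-interface=bond1
firewall-cmd --reload
```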
NetEye Satellite Requirements¶
A Satellite is a NetEye instance which depends on a main NetEye installation (either Single Node or Cluster), called Master, and carries out tasks such as:
execute Icinga 2 checks and forward results to the Master
collect logs and forward them to the Master
forward data through NATS
collect data through Tornado Collectors and forward them to the Master to be processed by Tornado
Besides those mentioned in Requirements for a Node, there are a few other requirements that a Satellite must satisfy:
It is required that both the Master and the Satellite be equipped with the same NetEye version
The NATS connection between Master and Satellite is always initiated by the Satellite, so please ensure that the Networking Requirements for NATS Leaf Nodes are satisfied
If you are in a NetEye Cluster environment, check that all resources are in Started status before proceeding with the Satellite configuration procedure
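The resource status mentioned in the last point can be checked from any cluster node with the standard pcs tooling, for example:

```
# Every resource should be reported as Started before configuring the Satellite.
pcs status resources
```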
TCP and UDP Ports Requirements¶
This section contains a list of TCP and UDP ports that should be opened on the Corporate Network and/or the Private (Heartbeat) Network to allow NetEye to operate correctly. These requirements apply to both NetEye Single Node and Cluster installations, except for cluster-specific ports.
It is important to remember that the Private (Heartbeat) Network should not be directly accessible from external networks. For security reasons, we suggest opening only the ports used by the running services and closing everything else.
Note
All ports are listed with their default values as assigned by IANA or by the respective software producers.
System Ports¶
Make sure the following system ports are always open, because they refer to basic functionalities of NetEye. The ports listed in Table 4 are to be opened on the Corporate Network in order to allow its communication with NetEye.
Additionally, the communication between NetEye and the Corporate Network should respect the NetEye architecture, which means that the selected ports are to be opened on the Master Node or on its Satellites.
| Protocol/Port | Service | Instance | Description |
|---|---|---|---|
| RMCP TCP 5900 | iDRAC Access, Inbound | Master, Satellite | Systems that need to manage a node via iDRAC should reach each Management IP Address on iDRAC dedicated ports. Please refer to Dell’s Support Documentation to understand the required ports. |
| TCP 80, 443 | NetEye Management Interface and System Updates, Inbound | Master, Satellite | Systems used to manage NetEye should reach the Cluster Virtual IP via HTTP/S. Satellites use those ports to receive data from agents. |
| TCP 22 | Node SSH Console, Inbound | Master, Satellite | Systems used to manage deep NetEye configuration and node configuration should reach every Physical Node IP via SSH. |
| TCP 25, 465 | SMTP, Outbound | Master | To allow sending of notifications, the required ports for SMTP outbound should be allowed from each Physical Node IP to the selected SMTP Relay Server. If the Icinga 2 notification feature is enabled on a Satellite as well, the same ports should be opened on the latter. |
| UDP 123 | NTP, Outbound | Master, Satellite | Each node should be able to reach the official internal time source server with the NTP protocol. |
| TCP 636, 3269 | LDAP Authentication and Authorization, Outbound | Master | To allow your Active Directory user accounts the ability to access NetEye, each node must be able to contact at least one DC on both ports 636 (LDAP) and 3269 (Global Catalog) encrypted over SSL. To allow your LDAP user account the ability to access NetEye, each node must be able to contact your LDAP Source on port 636 (or the port of your choice). |
| TCP 7422 | NATS Leaf Nodes | Master (Inbound), Satellite (Outbound) | Satellites should be able to reach the NetEye Master NATS Leaf Node port in order to send data generated on the Satellite to the Master. |
| TCP 4222 | NATS Server | Master (Inbound), Satellite | In order to send data from a NATS Client (e.g. a Telegraf) directly to NetEye, port 4222 should be opened (NATS TCP port). |
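As an illustrative sketch only (not a complete or mandatory policy), some of the inbound system ports above could be opened on a Master with firewalld, assuming the host firewall is managed manually and the default public zone faces the Corporate Network:

```
# Web interface, SSH, NATS server and NATS Leaf Node connections (inbound on the Master).
firewall-cmd --permanent --zone=public --add-port=80/tcp --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4222/tcp --add-port=7422/tcp
firewall-cmd --reload
```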
The ports in Table 5 should be opened on the Private (heartbeat) Network and include the cluster requirements specified by RedHat.
| Protocol/Port | Required for | Description |
|---|---|---|
| UDP 623 | iDRAC fencing | |
| TCP 2224 | Node-to-node communication | It is required to open port 2224 on each node to allow pcs to talk from any node to all nodes in the cluster, including itself. [[1]] |
| TCP 2347 | neteye-agent service | |
| TCP 3000 | Grafana | |
| TCP 3121 | Pacemaker Remote nodes | Required on all nodes if the cluster has any Pacemaker Remote nodes. [[2]] |
| TCP 3306 | MariaDB | |
| TCP 4748 | Tornado API | Communication with the Tornado API from the GUI and for testing. |
| TCP 5403 | Quorum device host | Required on the quorum device host when using a quorum device with corosync-qnetd. [[3]] |
| UDP 5404 | Corosync multicast UDP | Required on corosync nodes if corosync is configured for multicast UDP. |
| UDP 5405, 5406 | Corosync | Required on all corosync nodes. |
| TCP 5664 | Icinga 2 | Required by Icinga 2 for intra-cluster communication. [[4]] |
| TCP 7788-7799 | DRBD | Port range may be extended as new resources or services are added. |
| TCP 8086 | InfluxDB | |
| TCP 8000 | Lampo | |
Table Notes:
Monitoring Requirements¶
Monitoring should never be carried out on the private (heartbeat) cluster network.
At present, the NetEye Cluster’s Virtual IP is used for passive monitoring (i.e., by devices autonomously sending information to NetEye) and agent deployment, while the Physical Node’s IP is used for active monitoring (i.e., requests from NetEye to devices).
We distinguish the following types of monitoring:
Active monitoring through ICMP, consisting of direct ICMP requests from NetEye to monitored devices
Active monitoring through SNMP, similar to the previous one but using the SNMP protocol instead of ICMP
Passive monitoring through SNMP, which uses SNMP trap events sent from monitored devices to NetEye
Mail-based monitoring, which is based on emails sent by devices or users to NetEye that trigger specific events
The following monitoring requirements apply to the server that is to be monitored. Active and passive monitoring have different requirements in terms of ports. Moreover, the operating system installed on the devices to be monitored also influences the ports to be opened; all of them are reported in Table 6. Depending on the monitoring tasks activated, additional considerations are described in section Additional Remarks for Monitoring.
| Protocol/Port | Description | Monitoring |
|---|---|---|
| ICMP | Test via ping to check if a host is alive | Active/Passive |
| TCP 4222, 4244 | (APM) | Active/Passive |
| TCP 5001 | plugin check_iperf | Passive |
| TCP 5665 | Server monitoring (Icinga 2 protocol) | Active |
| UDP 161 | Device/server monitoring (SNMP protocol) | Active |
| UDP 162 | SNMP traps | Passive |
| TCP 135 | Windows server monitoring (WMI protocol) and Windows admin user (more ports are required) | Active, Windows devices only |
| TCP 22 | Linux server monitoring (SSH protocol with check_by_ssh) | Active, Linux devices only |
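Before configuring active ICMP or SNMP checks, it can help to verify manually from a NetEye Node that a monitored device answers on the protocols listed above; the hostname and community string are placeholders:

```
# Active ICMP: the device should answer echo requests.
ping -c 3 device01.example.com

# Active SNMP (UDP 161): walk the system subtree (1.3.6.1.2.1.1)
# with the read-only community string.
snmpwalk -v2c -c public device01.example.com 1.3.6.1.2.1.1
```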
Additional Remarks for Monitoring¶
Depending on the services enabled on the cluster, take into account the following:
For Sahi and/or check_webpage, create a dedicated user account if required.
Enable the SNMP v2c protocol and community on all servers and devices.
Enable all TCP and UDP ports needed for specific monitoring requirements, such as check_tcp and/or check_udp for network service ports like: 53 (DNS), 123 (NTP), 3306 (MySQL), etc. For a full list of reserved ports, you can consult this website.
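For instance, a single network service port can be tested with the standard monitoring plugins; the plugin path below is the usual location on RHEL and may differ on your installation:

```
# Check that the MariaDB/MySQL port of a monitored host answers within 5 seconds.
/usr/lib64/nagios/plugins/check_tcp -H db01.example.com -p 3306 -t 5
```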
You may need to contact your NetEye 4 consultant for the following requirements:
Create a database monitoring user, where the rights granted will depend on the database’s vendor
Create a user on HyperV systems
Allow connections between NetEye 4 and all VLANs/Subnets involved in monitoring
Individual Module Requirements¶
Individual NetEye modules may have their own specific requirements that will need to be taken into consideration if a particular Module is to be enabled. When configuring cluster nodes, you should also make sure that the following requirements are included for each node.
Note
Please pay attention to the type of Network - Corporate or Private - each port requirement applies to.
ntopng
The following ports must be opened on the NetEye Master side in order to allow the communication between ntopng, nProbe, and Redis. The ports are inbound.
| Port | Service/Description |
|---|---|
| TCP 5556 | zmq (nProbe client) |
| TCP 6363 | nProbe (Netflow collector) |

| Port | Service/Description |
|---|---|
| TCP 6379 | Redis |
SIEM
The following ports need to be opened on either the Corporate or the Private (Heartbeat) Network to be able to receive, process, and store log data. Please note that all the ports are inbound and refer to the Master instance only.
| Port | Description |
|---|---|
| TCP/UDP 514 | syslog/rsyslog |
| TCP 6161 | syslog/splunk |
| UDP 2055 | Netflow listening port (Netflow protocol) |
| TCP 5044 | Logstash input for Beats |
| TCP 5045 | Logstash input for Elastic Agent |
| TCP 9200 | Elasticsearch |
| TCP 8220 | Fleet Server |
| TCP 8200 | APM Server |
Note
Port 9200 should be opened if there are Satellite Nodes that send data to the Elasticsearch service
| Port | Description |
|---|---|
| TCP 4950 | El Proxy |
| TCP 5061 | Kibana |
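As a quick end-to-end test of the syslog input listed above, a monitored Linux host can send a test message with logger; neteye.example.com stands for the Master (or its Cluster Virtual IP) and must be replaced accordingly:

```
# Send a test message over TCP to the NetEye syslog listener on port 514.
logger --server neteye.example.com --port 514 --tcp "NetEye syslog connectivity test"
```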
Moreover, the following domains should be reachable to ensure the correct functioning of your Elastic Stack installation:
| Domain | Port | Intended Use |
|---|---|---|
| epr.elastic.co | 443 TCP | Elastic Package Registry (mandatory in all SIEM installations) |
| geoip.elastic.co | 443 TCP | Elastic GeoIP endpoint |
| storage.googleapis.com | 443 TCP | GeoLite2 City, GeoLite2 Country, and GeoLite2 ASN GeoIP2 databases used by the Elastic GeoIP processor |
SLM
The SLM Daemon needs a dedicated inbound port to be opened on the Master instance to operate correctly. The requirement refers to the Corporate Network.
| Port | Description |
|---|---|
| TCP 4949 | SLM daemon |
Alyvix
In order for the Alyvix Service to successfully communicate with NetEye, the following port should be opened on the Corporate Network for both Master and Satellite instances.
| Port | Description |
|---|---|
| TCP 4222 | While TCP 4222 grants inbound connection from the Alyvix Service to NetEye, TCP 443 should also be opened to allow outbound connection from NetEye to the Alyvix Service |
Single Purpose Nodes¶
Elastic-only nodes work only as part of the Elasticsearch cluster and communicate on the private (heartbeat) network; therefore, they do not expose any ports required by other services.
Voting-only nodes only provide quorum to several components of the NetEye cluster: DRBD, PCS, and Elasticsearch. Like Elastic-only nodes, they do not expose any service and communicate with other cluster nodes on the private (heartbeat) cluster network; therefore, no port needs to be explicitly opened.
Additional Software Installation¶
To satisfy particular use cases, you may need to use software that is not pre-installed on the NetEye Nodes. To ensure that this software and all its dependencies are automatically managed by the system, please install and manage it only via the DNF RPM package manager.
Warning
It is strongly recommended to avoid installing software in any other way.
For example, installing a Python module via pip will not let the system manage the module and keep track of its dependencies during updates, which may lead to the module being outdated and its dependencies being broken. In this case, instead of installing the Python module via pip, please find an RPM package that provides the module and install it via DNF.
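For example, assuming you need the Python requests module on a Node, a DNF-managed installation might look like the sketch below (the package name is illustrative and its availability depends on the enabled repositories):

```
# Look for an RPM that provides the module, then install it via DNF instead of pip.
dnf search python3-requests
dnf install python3-requests
```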