User Guide

Requirements

This section lists all the requirements for both NetEye and the individual modules.

Cluster Configuration Guidelines

The configuration descriptions below are generic guidelines that you should take into consideration when configuring NetEye 4 in a cluster environment. They are subject to change and should not be considered hard requirements.

A network infrastructure in which NetEye is involved should be carefully designed in order to exploit all of its functionality.

This is especially true for particularly complex setups, where the experience of a NetEye specialist can prove useful. To get in touch with one, please contact our team.

Cluster Networking Requirements

Table 3 Network Layout Requirements

Corporate Network

Private (Heartbeat) Network

Monitoring and Management Network

This network will be used by NetEye to collect monitoring and performance data and system logs, and to allow access to:

  • NetEye Web interface

  • Each node's SSH interface

  • Any other exposed services

This network must reach every system that needs to be monitored and should be reached by every system that needs to manage the NetEye Infrastructure.

Internal Communication Network

This network will be used for internal communication between each NetEye service. NetEye cluster nodes should be able to talk to each other without restriction. For security reasons, you should not share this network with other systems.

Table 4 Network Link Requirements (NetEye Appliance)

Corporate Network

Private (Heartbeat) Network

Link to Corporate Network

Although a single NIC will suffice, to allow service continuity in case of hardware malfunction we suggest that you plan for bonding of two network adapters in an active/standby (failover) configuration.

Link to Private (Heartbeat) Network

Although a single NIC will suffice, to allow service continuity in case of hardware malfunction we suggest that you plan for bonding of two network adapters in an active/standby (failover) configuration. Ensure that the round-trip latency between any two nodes is less than 300 ms, with 2 ms as the optimal target, as stated in the RHEL Corosync documentation.
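Before forming the cluster, it can be worth verifying that the heartbeat link meets this latency budget. The sketch below, using only Python's standard library, measures the median TCP connect round trip to a peer node; the address in the usage comment is a hypothetical placeholder for your own heartbeat IP, and SSH (port 22) is assumed as a service to connect to.

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 22, samples: int = 5) -> float:
    """Median TCP connect round-trip time to a peer node, in milliseconds."""
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        # A full TCP connect costs roughly one network round trip.
        with socket.create_connection((host, port), timeout=1.0):
            pass
        times.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(times)

# Hypothetical heartbeat address of the peer node:
# rtt = tcp_rtt_ms("192.168.47.2")
# print(f"median RTT: {rtt:.2f} ms")  # should stay well below 300 ms
```

A TCP connect slightly overestimates the raw network round trip, so a result comfortably below the 300 ms ceiling is what matters, not the exact figure.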

Table 5 IP Address Requirements

Corporate Network

Private (Heartbeat) Network

Physical Node IP Addresses

A dedicated IP address for each node, all in the same network segment. This IP is used both for management tasks and for active monitoring (from NetEye to devices).

Private (Heartbeat) Network

Internal services running on a 2-node cluster with all modules installed require at least 30 IP addresses. It is therefore strongly recommended to always configure a dedicated /24 network (e.g., 172.20.12.0/24) to avoid running out of available IPs and being forced to reconfigure the whole network if the cluster is expanded.
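As a quick sanity check, the address arithmetic behind this recommendation can be verified with Python's standard ipaddress module (172.20.12.0/24 is the example subnet mentioned above):

```python
import ipaddress

# Example dedicated heartbeat network, as suggested above.
heartbeat = ipaddress.ip_network("172.20.12.0/24")

usable = heartbeat.num_addresses - 2   # minus network and broadcast addresses
required = 30                          # minimum for a 2-node, all-modules cluster

print(f"{heartbeat}: {usable} usable addresses, {required} required")
assert usable >= required
```

A /24 leaves ample headroom (254 usable addresses against a minimum of 30), which is exactly why it is preferred over a tighter subnet that would need renumbering when the cluster grows.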

Management (iDRAC) IP addresses

A dedicated IP address for the management interface of each node

Cluster Virtual IP

One IP address used by the clustered system to allow monitoring and management from the public network

There should be no restrictions of any kind on intra-cluster communication. The following communications should be enabled towards the cluster from the external network:

Table 6 TCP Cluster and Management Communication Requirements

Protocol/Port

Corporate Network

RMCP TCP 5900

iDRAC Access. Systems that need to manage a node via iDRAC should reach each Management IP Address on iDRAC dedicated ports. Please refer to Dell’s Support Documentation to understand the required ports.

TCP 80, 443

NetEye Web Console. Systems used to manage NetEye should reach the Cluster Virtual IP via HTTP/S.

System Updates. Each node should be able to reach the Wuerth Phoenix official RPM Repository, and the outgoing IP Address should be fixed (not dynamic).

TCP 22

Node SSH Console. Systems used to manage deep NetEye configuration and node configuration should reach every Physical Node IP via SSH.

TCP 25,465

SMTP Outbound. To allow sending of notifications, the required ports for SMTP outbound should be allowed from each Physical Node IP to the selected SMTP Relay Server.

TCP 123

NTP. Each node should be able to reach the official internal time source server with NTP Protocol.

TCP 389, 3268

Authentication and Authorization. To allow your Active Directory user accounts the ability to access NetEye, each node must be able to contact at least one DC on both ports 389 (LDAP) and 3268 (Global Catalog).

To allow your LDAP user account the ability to access NetEye, each node must be able to contact your LDAP Source on port 389 (or the Port of your choice).

TCP 7422

NATS Leaf Nodes. The NATS Leaf Nodes are configured to talk to the NATS Server of the NetEye Master.
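A rough pre-flight check of the outbound rules in Table 6 can be scripted with a plain TCP connect test. The sketch below uses only Python's standard library; the host names in the list are hypothetical placeholders for your own Cluster Virtual IP, SMTP relay and domain controller.

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical targets; substitute your own addresses from Table 6.
checks = [
    ("neteye-vip.example.com", 443),   # NetEye Web Console
    ("smtp-relay.example.com", 25),    # SMTP outbound
    ("dc1.example.com", 389),          # Active Directory LDAP
]
# for host, port in checks:
#     print(f"{host}:{port} -> {'open' if tcp_reachable(host, port) else 'blocked'}")
```

Note that a connect test only covers TCP rules; UDP-based requirements (such as NTP, if your time source uses UDP 123) need protocol-specific probes.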

The requirements in Table 7 include the cluster requirements specified by Red Hat.

Table 7 Cluster-internal Port Requirements

Protocol/Port

Required for

UDP 623

iDRAC fencing

TCP 2224

Required on all nodes for node-to-node communication. It is crucial to open port 2224 in such a way that pcs from any node can talk to all nodes in the cluster, including itself. When using the Booth cluster ticket manager or a quorum device you must open port 2224 on all related hosts, such as Booth arbiters or the quorum device host.

TCP 2347

neteye-agent service.

TCP 3000

Grafana

TCP 3121

Required on all nodes if the cluster has any Pacemaker Remote nodes. Pacemaker's CRMd daemon on the full cluster nodes contacts the pacemaker_remoted daemon on Pacemaker Remote nodes at port 3121. If a separate interface is used for cluster communication, the port only needs to be open on that interface. At a minimum, the port should be open on Pacemaker Remote nodes to full cluster nodes. Because users may convert a host between a full node and a remote node, or run a remote node inside a container using the host's network, it can be useful to open the port to all nodes. It is not necessary to open the port to any hosts other than nodes.

TCP 3306

MariaDB

TCP 4748

Communication with Tornado API from the GUI and for testing.

TCP 5403

Required on the quorum device host when using a quorum device with corosync-qnetd. The default value can be changed with the -p option of the corosync-qnetd command.

TCP 5404

Required on corosync nodes if corosync is configured for multicast UDP

TCP 5405, 5406

Required on all corosync nodes (needed by corosync)

TCP 5664

Required by Icinga 2 for intra-cluster communication

TCP 7788-7799

DRBD (may be extended as new resources/services are added)

TCP 8086

InfluxDB

TCP 21064

Required on all nodes if the cluster contains any resources requiring DLM (such as CLVM or GFS2)
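On an already-configured node, a quick way to see which of the cluster-internal ports in Table 7 are actually being served locally is a bind probe: if binding fails with EADDRINUSE, something is listening. This is only an illustrative sketch, not a substitute for firewall auditing, and probing privileged ports (below 1024) requires root.

```python
import errno
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if a local TCP service is already bound to the given port.

    Caveat (assumption: run as root for ports < 1024): without privileges
    the bind fails with EACCES and the result is not meaningful.
    """
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        probe.bind((host, port))
    except OSError as exc:
        return exc.errno == errno.EADDRINUSE
    finally:
        probe.close()
    return False

# Illustrative audit of a few cluster-internal ports from Table 7:
# for p in (2224, 3000, 3121, 3306, 5405):
#     print(p, "listening" if port_in_use(p) else "not listening")
```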

Monitoring Requirements

Monitoring SHOULD NOT be carried out on the private (heartbeat) cluster network. All requirements in this section refer only to the corporate network.

At present, the cluster’s virtual IP is used for passive monitoring (devices autonomously sending information to NetEye) and agent deployment, while the node’s IP is used for active monitoring (requests from NetEye to devices).

Table 8 Monitoring Communication Requirements

Purpose

Description

Active monitoring through ICMP

Direct ICMP requests from NetEye to monitored devices

Active monitoring through SNMP

Direct SNMP requests from NetEye to monitored devices

Passive monitoring through SNMP

SNMP trap events sent from monitored devices to NetEye

Mail based monitoring

Email sent by devices or users to NetEye that trigger specific events

Table 9 General Monitoring Requirements

Protocol/Port

Description

ICMP

Test via ping to check if a host is alive

TCP 4222, 4244

bidirectional between NetEye 4 and target systems (APM)

TCP 5001

from NetEye 4 to target systems: plugin check_iperf

TCP 5665

from NetEye 4 to target systems: server monitoring (Icinga 2 protocol)

TCP 5666

from NetEye 4 to target systems: server monitoring (NRPE protocol)

TCP 5667

from NetEye 4 to target systems: passive monitoring (NSCA protocol)

UDP 161

from NetEye 4 to target systems: device/server monitoring (SNMP protocol)

UDP 162

from target systems to NetEye 4: SNMP traps
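Passive monitoring reverses the direction of traffic: the device sends, NetEye listens. The minimal sketch below illustrates this with a bare UDP receiver standing in for the trap listener on UDP 162. An unprivileged placeholder port is used because binding port 162 itself requires root, and real trap decoding is of course done by NetEye's own services; the point is only that the firewall must let the datagrams in.

```python
import socket

def receive_one_datagram(port: int, timeout: float = 5.0) -> bytes:
    """Wait for a single UDP datagram on the given port and return its payload.

    Stands in for a trap receiver: in production NetEye listens on UDP 162,
    which requires root; use a high port (e.g. 10162) for experiments.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("0.0.0.0", port))          # accept datagrams from any device
        data, sender = sock.recvfrom(65535)   # one datagram, up to max UDP size
        return data
```

A device-side test is then just a single sendto() of any payload to the NetEye address on that port.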

Table 10 Windows OS-Specific Monitoring Requirements

Port

Description

TCP 135

from NetEye 4 to target systems: Windows server monitoring (WMI protocol); a Windows admin user is needed, and additional ports are required

TCP 12489

from NetEye 4 to target systems: plugin check_NT

Table 11 Linux/Unix OS-Specific Monitoring Requirements

Port

Description

TCP 22

from NetEye 4 to target systems: server monitoring (SSH protocol with check_by_ssh)

Additional Requirements for Monitoring

  • For Sahi and/or check_webpage, create a dedicated user account if required.

  • Enable all TCP and UDP ports needed for specific monitoring requirements, such as check_tcp and/or check_udp for network service ports like 23 (Telnet), 53 (DNS), 123 (NTP), 3306 (MySQL), etc. For a full list of registered ports, consult the IANA Service Name and Transport Protocol Port Number Registry.

  • Enable the SNMP v2c protocol and community on all servers and devices.

  • You may need to contact your NetEye 4 consultant for the following requirements:

    • Create a database monitoring user, where the rights granted will depend on the database’s vendor

    • Create a user on HyperV systems

    • Allow connections between NetEye 4 and all VLANs/Subnets involved in monitoring

Access and Notification Requirements

Account Management

In order to log in to NetEye 4 with a centralized account, create an LDAP/AD user with read permissions on the following tree objects:

  • Account name

  • Password

  • Email address

You will also need to open the following TCP ports from NetEye 4 to the directory server:

Table 12 LDAP/AD Port Requirements

Port

Description

TCP 389

LDAP/AD domain-specific information

TCP 636

LDAP/AD domain-specific information encrypted over SSL by default

TCP 3268

LDAP queries via Global Catalog

TCP 3269

LDAP queries via Global Catalog, encrypted over SSL by default

Notifications

  • Relay all email sent to eventgw@domain on your SMTP server to the NetEye 4 Event Handler

  • In order to send SMS messages, unset the PIN on your SIM card

  • We provide two types of modem:

Elasticsearch Only Nodes

Elasticsearch Only nodes are NetEye 4 nodes that work only as part of the Elasticsearch cluster, and therefore do not expose the ports required by other services. Elasticsearch nodes communicate on the private (heartbeat) cluster network, so no ports need to be explicitly opened.

Voting Only Nodes

Voting Only nodes are NetEye 4 nodes that only provide quorum to several components of the NetEye cluster: DRBD, PCS and Elasticsearch. A Voting Only node does not expose any service and communicates with the other cluster nodes on the private (heartbeat) cluster network, so no ports need to be explicitly opened.

Individual Module Requirements

Individual NetEye modules may have their own specific requirements that will need to be taken into consideration if they are enabled. In particular, when configuring cluster nodes, you should also make sure that the following requirements are included for each node.

Log management

Table 13 Port Requirements

Port

Description

UDP 514

from NetEye 4 to target systems: to get logs data (Log Management: syslog/rsyslog)

TCP 514

from NetEye 4 to target systems: to get logs data (Log Management: syslog/rsyslog)

TCP 6161

from NetEye 4 to target systems: to get logs data (Log Management: syslog/splunk)

SIEM

The SIEM module requires the Log Management module in order to work.

Table 14 Port Requirements

Port

Description

All ports required by the Log Management module, plus:

UDP 2055

Netflow listening port (Netflow protocol)

TCP 5044

Logstash input for Beats

ntopng

Two ports must be opened to allow communication between ntopng and nProbe:

Table 15 Port Requirements

Port

Description

TCP 5556

ZMQ

TCP 6363

nProbe (Netflow collector)

NATS - Messaging system

From NetEye 4.12, Tornado relies on the NATS messaging system for event collection. Direct TCP communication is deprecated and support for it may be removed at any point in the future.

Table 16 NATS Port Requirements

Port

Description

DEPRECATED TCP 4747

Tornado listening socket

TCP 4222

NATS server (required for proper Tornado communication)
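A NATS server announces itself with an INFO line as soon as a TCP connection is opened, which makes a lightweight reachability check on port 4222 possible without a NATS client library. The sketch below reads and parses that banner; the master host name in the usage comment is a hypothetical placeholder.

```python
import json
import socket

def nats_server_info(host: str, port: int = 4222, timeout: float = 2.0) -> dict:
    """Read the INFO banner a NATS server sends on every new TCP connection."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = b""
        # The server sends a single line: INFO {...json...}\r\n
        while not banner.endswith(b"\r\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            banner += chunk
    verb, _, payload = banner.decode().partition(" ")
    if verb != "INFO":
        raise ValueError(f"unexpected protocol banner: {banner!r}")
    return json.loads(payload)

# Hypothetical master address:
# info = nats_server_info("neteye-master.example.com")
# print(info.get("version"))
```

A successful parse confirms both that TCP 4222 is open and that the process answering is actually a NATS server, not some other service squatting on the port.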