User Guide

Feature Modules Installation

NetEye Core is the set of the most commonly used functionalities offered by the platform, including monitoring, visualization (both dashboards and maps), configuration, reporting, and event handling.

However, the NetEye modular architecture supports the installation of additional Feature Modules that extend the NetEye Core functionalities. This separation allows you to customize NetEye to address specific customer needs.

NetEye Modules vs. Preview Software

There are two types of additional modules: (NetEye) Feature Modules and Preview Software. Feature Modules are fully fledged modules whose functionalities are well defined and established, whereas Preview Software modules are not yet complete and provide a set of functionalities that might change in the future; they can be installed to try out new software that will later become part of the official NetEye platform.

Note also that each Feature Module requires the installation of a group of packages, while a Preview Software module is limited to one or a few packages. They also belong to different repositories and can be installed from the command line.
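If you want to check which Yum groups the NetEye repositories provide before installing anything, you can list them from the command line; the repository names below are the same ones used in the installation commands later in this guide:

# yum grouplist --enablerepo=neteye --enablerepo=neteye-extras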

In order to install any of them, execute the corresponding command, which changes slightly depending on the module’s type (please consider following the advice on Safe Command Execution when you do this), and then follow the procedure for either a Single Node or Cluster Node.

NetEye Modules

Each of these modules has its own distinct contract and requires NetEye Core. Note also that, starting from release 4.12, the SIEM module no longer depends on Log Manager and now requires only NetEye Core.

Module                  Requires      Yum group name
--------------------    -----------   --------------------
Log Manager             NetEye Core   neteye-logmanagement
SIEM                    NetEye Core   neteye-siem
vSphereDB               NetEye Core   neteye-vmd
SLM                     NetEye Core   neteye-slm
Asset                   NetEye Core   neteye-asset
ntopng                  NetEye Core   neteye-ntopng
Command Orchestrator    NetEye Core   neteye-cmd

Preview Software

Modules of this type can be installed whenever desired, and reside in the NetEye Extras repository.

Module          Requires   Yum group name
-------------   --------   --------------
Elastic Agent   SIEM       elastic-agent

Single Node

To install a NetEye Module, run the following command with the appropriate Yum group name from the table above:

# yum -y groupinstall <yum-group-name> --enablerepo=neteye
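
For example, to install the Log Manager module on a single node, using its Yum group name from the table above:

# yum -y groupinstall neteye-logmanagement --enablerepo=neteye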

Preview Software modules require a slightly different syntax:

# yum -y groupinstall <yum-group-name> --enablerepo=neteye --enablerepo=neteye-extras

Here, <yum-group-name> is taken from the table in the Preview Software section above.
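
For example, installing the Elastic Agent Preview Software from the table above would look like this:

# yum -y groupinstall elastic-agent --enablerepo=neteye --enablerepo=neteye-extras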

Once done, please follow the procedure to update a NetEye single instance, then the directions in section Refreshing the additional module to complete the overall installation.

Cluster Node

Unlike on a Single Node, the NetEye Module or Preview Software must be installed on every node of the cluster, using the same commands described in the previous section. That is, the appropriate command among:

# yum -y groupinstall <yum-group-name> --enablerepo=neteye
# yum -y groupinstall <yum-group-name> --enablerepo=neteye --enablerepo=neteye-extras

must be run on each node of the cluster.
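
If you prefer not to log in to each node by hand, the same command can also be launched from one node over SSH; the loop below is only a sketch, which assumes the node hostnames neteye01.local and neteye02.local (the ones appearing in the pcs output later in this section) and takes the Asset module as an example:

# for node in neteye01.local neteye02.local; do ssh "$node" 'yum -y groupinstall neteye-asset --enablerepo=neteye'; done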

After installing the <feature_module> OR <preview_software> with yum, these additional steps are needed:

  • Look for template files whose path matches the pattern /usr/share/neteye/cluster/templates/Services-<name>-*.conf.tpl (where <name> is the name of the <feature_module> OR <preview_software> you are installing, and * is a wildcard for any string). If no such file exists, skip the following steps and go to the next section.

  • If, on the contrary, one or more such files exist, adapt each of them to the settings of your cluster and save it to a file with the same name without the .tpl suffix.

  • Now, for each file saved in the previous step, create the cluster resource by executing the following command on one of the nodes of the cluster (replace <name> with the name of the <feature_module> OR <preview_software> you are installing, and the * with the string that completes the actual filename):

    # /usr/share/neteye/scripts/cluster/cluster_service_setup.pl -c /usr/share/neteye/cluster/templates/Services-<name>-*.conf
    

    When the execution of the script above has finished, please perform the steps described in the procedure to Update a NetEye Cluster.

Example: I want to install the ‘asset’ feature module on a NetEye cluster.

After performing yum groupinstall neteye-asset --enablerepo=neteye on each node of the cluster, on one node I find the following files with pattern /usr/share/neteye/cluster/templates/Services-asset-*.conf.tpl:

/usr/share/neteye/cluster/templates/Services-asset-glpi.conf.tpl
/usr/share/neteye/cluster/templates/Services-asset-ocsinventory-ocsreports.conf.tpl
/usr/share/neteye/cluster/templates/Services-asset-ocsinventory-server.conf.tpl

I adapt them to my cluster settings (in this case, adapting ip_pre and cidr_netmask, and checking drbd_minor and drbd_port) and save them in the files:

/usr/share/neteye/cluster/templates/Services-asset-glpi.conf
/usr/share/neteye/cluster/templates/Services-asset-ocsinventory-ocsreports.conf
/usr/share/neteye/cluster/templates/Services-asset-ocsinventory-server.conf

I create the cluster resources with the commands:

# /usr/share/neteye/scripts/cluster/cluster_service_setup.pl -c /usr/share/neteye/cluster/templates/Services-asset-glpi.conf
# /usr/share/neteye/scripts/cluster/cluster_service_setup.pl -c /usr/share/neteye/cluster/templates/Services-asset-ocsinventory-ocsreports.conf
# /usr/share/neteye/scripts/cluster/cluster_service_setup.pl -c /usr/share/neteye/cluster/templates/Services-asset-ocsinventory-server.conf

Finally, follow the procedure to Update a NetEye Cluster. To complete the overall installation, please follow the directions on section Refreshing the additional module.

Verify that a module is running correctly

After installing a Feature Module or Preview Software, you need to make sure that all services are running.

The commands to be used differ between Single Node and Cluster installations.

NetEye Single Node Installation

The neteye status command outputs a list of the status of all NetEye services, similar to the following snippet:

DOWN [3] elastic-blockchain-proxy.service
DOWN [3] elasticsearch.service
DOWN [3] eventhandlerd.service
UP   [0] filebeat.service
UP   [0] grafana-server.service
UP   [0] httpd.service
DOWN [3] icinga2-master.service
UP   [0] influxdb.service
DOWN [3] kibana-logmanager.service
DOWN [0] lampod.service
UP   [0] logstash.service
UP   [0] mariadb.service
DOWN [3] nats-server.service
UP   [0] neteye-agent.service
UP   [0] nginx.service
UP   [0] nprobe.service
UP   [0] ntopng.service
UP   [0] redis.service
UP   [0] rh-php73-php-fpm.service
UP   [0] rsyslog-logmanager.service
UP   [0] slmd.service
UP   [0] smsd.service
UP   [0] snmptrapd.service
UP   [0] tornado.service
DOWN [3] tornado_email_collector.service
DOWN [0] tornado_icinga2_collector.service
DOWN [3] tornado_nats_json_collector.service
DOWN [3] tornado_webhook_collector.service

Note

Output may vary, depending on both installed modules and running services.

Suppose you have just installed Tornado and all its collectors: they should be running, but are marked as DOWN. This means that something has gone wrong and you need to understand why; you can check the dedicated troubleshooting section for directions.
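
Before (or while) consulting the troubleshooting section, a quick first check is to inspect the systemd unit of a service that is marked as DOWN; the unit name below is just the Tornado service taken from the snippet above:

# systemctl status tornado.service
# journalctl -u tornado.service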

NetEye Cluster Installation

On a cluster it is necessary to differentiate between clustered and non-clustered services. Non-clustered services, such as Elasticsearch, follow the same approach shown in the previous section and, in case of issues, can be inspected with the same commands mentioned in the corresponding troubleshooting section.

Clustered services, on the contrary, require a different approach: the neteye status, neteye start, and neteye stop commands cannot be used, because they are not available on a cluster.

Note

Clustered services are referred to as Resources. For example, a Tornado instance running on a NetEye single installation is a service, while a Tornado instance running on a NetEye cluster is a resource.

Therefore, to verify if resources are running correctly, use the pcs status command, which outputs the status of the cluster and all the resources, similar to the following excerpt.

Cluster name: NetEye
Stack: corosync
Current DC: neteye01.local (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Wed Jul 28 09:47:52 2021
Last change: Tue Jul 27 15:04:36 2021 by root via cibadmin on neteye02.local
2 nodes configured
74 resource instances configured
Online: [ neteye01.local neteye02.local ]
Full list of resources:
 cluster_ip    (ocf::heartbeat:IPaddr2):    Started neteye02.local
 Resource Group: tornado_rsyslog_collector_group
     tornado_rsyslog_collector_drbd_fs    (ocf::heartbeat:Filesystem):    Started neteye02.local
 Resource Group: tornado_group

In case a resource is not starting correctly, it will be listed as Failed at the end of the output (see the snippet below). You need to understand why it is not running: the dedicated cluster troubleshooting section features options that you can apply to find the root cause of the problem.

Failed Resource Actions:
* tornado_email_collector_monitor_30000 on neteye02.local 'not running' (7): call=414, status=complete, exitreason='',
    last-rc-change='Wed Jul 28 09:57:21 2021', queued=0ms, exec=0ms
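
Once the root cause has been identified and fixed, the failed action can usually be cleared so that the cluster retries the resource; the resource name below is simply the one appearing in the excerpt above:

# pcs resource cleanup tornado_email_collector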

Refreshing the additional module

If the procedure you followed above was successful, you can now refresh the additional module with these steps:

  • Refresh your browser window. This will ensure that the new module appears in the NetEye menu and that all JavaScript and CSS are reloaded properly.

  • Log out of NetEye and then log back in so that any permissions or roles required by the new module will take effect.