
Resources Tuning

This section contains a collection of suggested settings for various services running on NetEye.

MariaDB

MariaDB is started with the default upstream settings. If the size of an installation requires it, MariaDB's resource usage can be adjusted to meet higher performance requirements. The following settings can be added to the file /neteye/shared/mysql/conf/my.cnf.d/custom.conf:

[mysqld]
innodb_buffer_pool_size = 16G
tmp_table_size = 512M
max_heap_table_size = 512M
innodb_sort_buffer_size = 16000000
sort_buffer_size = 32M
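
For the new settings to take effect, MariaDB must be restarted. A minimal sketch, assuming MariaDB runs as the mariadb systemd unit on a single node and as a pcs-managed resource in a cluster (the resource name mysql is an assumption; verify it with pcs resource status):

# Single node
systemctl restart mariadb

# Cluster environment (resource name is an assumption, check with: pcs resource status)
pcs resource restart mysql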

Icingaweb2 GUI

Performance of the Icingaweb2 Graphical User Interface can be significantly improved in high-load environments by adding indexes to, and updating the column definition of, the hostgroups and history-related tables. To do this, execute the queries below manually:

ALTER TABLE icinga_hostgroups MODIFY hostgroup_object_id bigint(20) unsigned NOT NULL;
ALTER TABLE icinga_hostgroups ADD UNIQUE INDEX idx_hostgroups_hostgroup_object_id (hostgroup_object_id);
ALTER TABLE icinga_commenthistory ADD INDEX idx_icinga_commenthistory_entry_time (entry_time);
ALTER TABLE icinga_downtimehistory ADD INDEX idx_icinga_downtimehistory_entry_time (entry_time);
ALTER TABLE icinga_notifications ADD INDEX idx_icinga_notifications_start_time (start_time);
ALTER TABLE icinga_statehistory ADD INDEX idx_icinga_statehistory_state_time (state_time);
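
These statements can be applied with the mysql command line client against the Icinga IDO database. A minimal sketch, assuming the IDO database is named icinga, that the client has the necessary credentials configured, and that the statements above were saved to a hypothetical file icingaweb2_tuning.sql:

mysql icinga < icingaweb2_tuning.sql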

InfluxDB

InfluxDB is a time series database designed to handle high volumes of write and query loads in NetEye. If you want to learn more about InfluxDB, you can refer to the official InfluxDB documentation.

Migration of inmem (in-memory) indices to TSI (time-series)

Starting from NetEye 4.14, InfluxDB uses the Time Series Index (TSI).

However, an existing setup will still use the in-memory (inmem) index for writing and fetching data until you perform the migration procedure, which consists of the following steps.

  1. Build TSI by running the influx_inspect buildtsi command:

    In a cluster environment, the below command must be executed on the node on which the InfluxDB resource is running:

    sudo -u influxdb influx_inspect buildtsi -datadir /neteye/shared/influxdb/data/data -waldir /neteye/shared/influxdb/data/wal -v
    

    Upon execution, the above command will build the TSI for all the databases that exist in the data directory (/neteye/shared/influxdb/data/data).

    Note

    If you want to build the TSI only for a specific database, add the -database <database_name> parameter to the above command.
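
    For instance, a sketch of the command limited to a single database (the database name telegraf is only a placeholder):

    sudo -u influxdb influx_inspect buildtsi -datadir /neteye/shared/influxdb/data/data -waldir /neteye/shared/influxdb/data/wal -database telegraf -v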

  2. Restart the influxdb service:

  • Single node:

    systemctl restart influxdb
    
  • Cluster environment:

    pcs resource restart influxdb
    

The official InfluxDB Upgrade documentation contains more information about the inmem (in-memory) to TSI (time-series) migration process.

How to Enable Load Balancing For Logstash

Warning

This functionality is in beta stage and may be subject to changes. Beta features may break during minor upgrades and their quality is not ensured by regression testing.

The load balancing feature for logstash exploits NGINX's ability to act as a reverse proxy and distribute incoming logstash connections among all nodes in the cluster. In this way, logstash is no longer a cluster resource, but a standalone service running on each node of the cluster.

Note, however, that if you enable this feature you will lose the ability to sign the log files. This happens because, with this setup, logmanager has access only to the log files present on file systems mounted on the node where it is running.

Indeed, rsyslog cannot take advantage of the load balancing feature; therefore, only the logs on the node on which logmanager is running will be signed.

In the case of Beats, log files will be sent through the load balancer and therefore will not be signed.

This how-to will guide you through setting up load balancing for logstash. In a nutshell, you first need to disable the logstash cluster resource, then modify or add the logstash and NGINX configurations, and finally keep the logstash configuration in sync on all nodes.

In more detail, these are the steps:

  1. Permanently disable the cluster resource for logstash: run pcs resource disable logstash

  2. Create a local logstash service on each node in the cluster by following these steps:

    1. The configuration files will be stored in /neteye/local/logstash/conf, so copy them over from /neteye/shared/logstash/conf
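
      A minimal sketch of the copy (run it on each node; the paths are those mentioned above):

        mkdir -p /neteye/local/logstash
        cp -a /neteye/shared/logstash/conf /neteye/local/logstash/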

      1. Fix all the paths in the conf files:

        find /neteye/local/logstash/conf -type f -exec sed -i 's/shared/local/g' "{}" \;
        
    2. Edit both the /neteye/shared/logstash/conf/sysconfig/logstash and /neteye/local/logstash/conf/sysconfig/logstash files and add the following lines to them:

      LS_SETTINGS_DIR="/neteye/local/logstash/conf/"
      OPTIONS="--config.reload.automatic"
      
    3. Add the host directive in /neteye/local/logstash/conf/conf.d/0_i03_agent_beats.input (use the cluster internal network IP): host => "192.168.xxx.xxx"

    4. Create a new logstash service (call it e.g., logstash-local.service) with the following content:

      [Unit]
      Description=logstash local
      
      [Service]
      Type=simple
      User=logstash
      Group=logstash
      EnvironmentFile=-/etc/default/logstash
      EnvironmentFile=-/neteye/local/logstash/conf/sysconfig/logstash
      ExecStartPre=/usr/share/logstash/bin/generate-config.sh
      ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/neteye/local/logstash/conf/" $OPTIONS
      Restart=always
      WorkingDirectory=/
      Nice=19
      LimitNOFILE=16384
      
      [Install]
      WantedBy=multi-user.target
      
    5. Add the service into the neteye cluster local systemd targets. You can refer to the Cluster Technology and Architecture chapter of the user guide for more information.

    6. Edit the file /etc/hosts to point the host logstash.neteyelocal to the cluster IP.
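
      A sketch of the resulting /etc/hosts entry (the address is a placeholder for your cluster IP):

      192.168.xxx.xxx   logstash.neteyelocal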

  3. Add the NGINX load-balancing configuration in a file called logstash-loadbalanced.j2. The name is very important because it will be used by neteye install to set up the correct mapping between the logstash service and NGINX. The file needs to have the following content. Please pay special attention to copying the whole snippet AS-IS, especially the three lines of the for loop, because they are essential for configuring NGINX on all the cluster nodes:

    upstream logstash_ingest {
      {% for node in nodes %}
        server {{ hostvars[node].internal_node_addr }}:5044;
      {% endfor %}
    }

    server {
      listen logstash.neteyelocal:5044;
      proxy_pass logstash_ingest;
    }
    
  4. Remember that the standalone logstash configuration must be kept in sync on all nodes; therefore, the /neteye/local/logstash/conf/ directory must have the same content on all nodes. To achieve this you can, for example, set up a cron job that uses rsync to maintain the synchronisation, as sketched below.
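
    A minimal sketch of such a cron job, run on the node holding the reference copy of the configuration (the peer host name neteye02.neteyelocal and the 5-minute schedule are placeholders, passwordless SSH between the nodes is assumed, and one line should be added per additional node):

    # /etc/cron.d/logstash-conf-sync (hypothetical file name)
    */5 * * * * root rsync -a --delete /neteye/local/logstash/conf/ neteye02.neteyelocal:/neteye/local/logstash/conf/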

  5. Run neteye install only once on any cluster node.

  6. Start the local logstash service on every node: systemctl start logstash-local

SIEM Additional Tuning (X-Pack)

Encrypt sensitive data check

If you use Watcher and have chosen to encrypt sensitive data (by setting xpack.watcher.encrypt_sensitive_data to true), you must also place a key in the secure settings store.

To pass this bootstrap check, you must set the xpack.watcher.encryption_key on each node in the cluster. For more information, see the official documentation.
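
A sketch of how the key can be created and placed in the secure settings store on each Elasticsearch node, following the upstream X-Pack procedure (the paths below assume a standard Elasticsearch layout and may differ on NetEye; treat them as placeholders):

# Generate a system key file in the Elasticsearch config directory
/usr/share/elasticsearch/bin/elasticsearch-syskeygen

# Add the generated key to the Elasticsearch keystore as xpack.watcher.encryption_key
/usr/share/elasticsearch/bin/elasticsearch-keystore add-file xpack.watcher.encryption_key /etc/elasticsearch/system_key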