
Cluster Upgrade from 4.41 to 4.42

This guide leads you through the steps required to upgrade a NetEye Cluster installation from version 4.41 to 4.42.

During the upgrade, individual nodes will be put into standby mode, so overall performance will be degraded until the upgrade is completed and all nodes are taken out of standby mode. Provided that the environment's connectivity is stable, the upgrade procedure may take up to 30 minutes per node.

Warning

Remember that you must upgrade sequentially without skipping versions; therefore, an upgrade to 4.42 is possible only from 4.41. For example, if you have version 4.27, you must first upgrade to 4.28, then to 4.29, and so on.

Breaking Changes

NetEye SIEM feature module renaming

From NetEye 4.42 onwards, the SIEM feature module is now called Elastic Stack. This change better reflects the module’s functionalities, which encompass the entire Elastic Stack, including Observability and not only SIEM.

This change includes:

  1. In the User Guide, the feature module will be referred to as Elastic Stack instead of SIEM

  2. To install the feature module on systems without it, use the dnf group neteye-elastic-stack instead of neteye-siem, as explained in the Additional NetEye Components section.

  3. If you have NetEye SIEM installed, upgrading to 4.42 will automatically install the dnf group neteye-elastic-stack.

To ensure a smooth transition, the old dnf group neteye-siem will remain installed after the upgrade but will be removed in future NetEye versions. Any custom integration or automation relying on the presence of the old neteye-siem dnf group will continue to function as before. However, you should update it to refer to the new neteye-elastic-stack dnf group to prepare for the removal of the old group in future versions.
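To find such references before the old group is removed, a simple grep over your automation scripts is enough. The snippet below is only an illustration: it creates a stand-in file (the path is made up) and scans it for the old group name; adapt the search to wherever your own scripts live.

```shell
# Stand-in automation snippet that still installs the old dnf group (example only)
printf 'dnf group install neteye-siem\n' > /tmp/automation-example.sh

# Locate any remaining references to the old group name
grep -n 'neteye-siem' /tmp/automation-example.sh   # -> 1:dnf group install neteye-siem
```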

MariaDB Galera Cluster

During the upgrade to NetEye 4.42, MariaDB will be migrated to a MariaDB Galera Cluster. This improves the reliability of the database system on NetEye clusters, since the service will be distributed across multiple nodes instead of relying on a single instance. It also minimizes the downtime of the database, and of all services that rely on it, in case of a node failure or a service restart. For system consistency, the MariaDB Galera migration will also be performed on NetEye single-node installations, where the MariaDB instance will be migrated to a single-node Galera Cluster.

Deprecated Settings

With the transition to MariaDB Galera Cluster in NetEye 4.42, several default settings and functionalities have been updated or deprecated. Please review the following changes to ensure compatibility and optimal performance:

Directory Location Change

With the migration to MariaDB Galera Cluster, the MariaDB directory has been moved from /neteye/shared/mysql to /neteye/local/mariadb. This is a breaking change that affects any custom backup scripts targeting the old location, monitoring configurations checking the old path, and custom integrations or automations accessing data/configuration in the old location.

All custom MariaDB configuration files placed in the my.cnf.d folder will be automatically migrated to the new directory during the upgrade process.

If you have any scripts, monitoring, or configurations that reference the old path, please update them to use the new location to ensure proper functionality after the upgrade.
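For example, a backup script that still targets the old directory can be pointed at the new one with a search-and-replace. The script path below is illustrative only; always review such automated edits before relying on them.

```shell
# Stand-in backup script that still references the old MariaDB directory (example only)
printf 'tar czf /backup/mysql.tar.gz /neteye/shared/mysql\n' > /tmp/backup-example.sh

# Point the script at the new Galera data directory
sed -i 's|/neteye/shared/mysql|/neteye/local/mariadb|g' /tmp/backup-example.sh

cat /tmp/backup-example.sh   # -> tar czf /backup/mysql.tar.gz /neteye/local/mariadb
```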

User Privileges Management

In Galera Cluster, modifying user privileges requires the use of the GRANT statement. Directly inserting into the mysql.user table (e.g., INSERT INTO mysql.user ...) is not supported and will not propagate changes across the cluster. Always use GRANT to ensure consistent and reliable privilege updates.
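For instance, instead of inserting rows into mysql.user directly, privileges for an application user would be granted like this (user, database, and password below are placeholders, not names used by NetEye):

```sql
-- Create the user and grant privileges via DDL statements,
-- which Galera replicates to all nodes; never write to mysql.user directly.
CREATE USER IF NOT EXISTS 'app_user'@'localhost' IDENTIFIED BY 'change-me';
GRANT SELECT, INSERT, UPDATE ON app_db.* TO 'app_user'@'localhost';
```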

Unsupported Locking Mechanisms

The following locking operations are not supported in Galera Cluster and may lead to errors or undefined behavior:

  • LOCK TABLES and UNLOCK TABLES

  • GET_LOCK and RELEASE_LOCK

If you have custom applications interacting with the MariaDB database, please ensure they do not perform these operations and consider using alternative methods for synchronization and locking, such as transactions or application-level locks.
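As a sketch, a critical section that previously used GET_LOCK could instead rely on an InnoDB row lock inside a transaction (the table and column names here are hypothetical):

```sql
-- Acquire a row-level lock instead of GET_LOCK(); other writers touching
-- the same row block until COMMIT, and the change replicates across Galera nodes.
START TRANSACTION;
SELECT state FROM job_locks WHERE job_name = 'nightly_report' FOR UPDATE;
-- ... perform the work guarded by the lock ...
UPDATE job_locks SET state = 'done' WHERE job_name = 'nightly_report';
COMMIT;
```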

More information about the differences between MariaDB and Galera Cluster can be found in the official Galera documentation.

Large Pages Support

If you have previously configured, or wish to configure, large pages for MariaDB performance optimization (via the large_pages setting), please note that with the migration to Galera Cluster you will need to create a systemd drop-in file for the mariadb-galera.service to grant the service the CAP_IPC_LOCK capability. This configuration is not automatically migrated during the upgrade and requires manual intervention, but only if you need large pages support.
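A minimal drop-in might look like the following. The file name is arbitrary, and the exact directives your setup needs may differ, so treat this as a sketch to verify rather than a definitive configuration:

```ini
# /etc/systemd/system/mariadb-galera.service.d/large-pages.conf
[Service]
AmbientCapabilities=CAP_IPC_LOCK
LimitMEMLOCK=infinity
```

After creating the file, run systemctl daemon-reload and restart the service for the change to take effect.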

Upgrade and Migration procedure

The NetEye upgrade procedure to 4.42 will automatically migrate the existing PCS-managed MariaDB instance to a brand new architecture based on a Galera Cluster, Nginx, and Keepalived, to maximize the availability of the database service. The migration is performed automatically during the upgrade process: no manual intervention is required and no additional disk space is needed. Here is a brief overview of the migration steps:

  1. Preparation of Galera Cluster: A new Galera Cluster is set up, with the existing PCS-managed MariaDB instance serving as the first node in this cluster.

  2. Node-by-Node Migration: Each node, one at a time, except the one hosting the current MariaDB instance, undergoes the following steps:

    • DRBD Volume Removal: The existing DRBD volume assigned to MariaDB is deleted.

    • New Volume Creation: A new storage volume is created for Galera with the same size as the old DRBD volume and mounted at /neteye/local/mariadb.

    • Galera Node Initialization: The Galera instance is started on the node and joins the cluster.

    • Data Synchronization: The joining node synchronizes its data with the cluster. Depending on the database size, this process may take some time.

  3. Finalization: Once all database service nodes have joined the Galera Cluster, the following final steps are performed:

    • Pacemaker Resource Removal: The Pacemaker resource managing the MariaDB instance is deleted.

    • DRBD Volume Deletion: The last remaining DRBD volume associated with MariaDB is removed.

    • Final Node Integration: The last node joins the Galera Cluster, with its data volume correctly placed at /neteye/local/mariadb.

The migration process is designed to minimize downtime and to ensure a smooth transition to the new Galera Cluster architecture.

Prerequisites

Before starting the upgrade, carefully read the latest release notes on NetEye’s blog and check the features that will be changed or deprecated after the upgrade.

  1. All NetEye packages installed on a currently running version must be updated according to the update procedure prior to running the upgrade.

  2. NetEye must be up and running in a healthy state.

  3. Disk Space required:

    • 3GB for / and /var

    • 150MB for /boot

  4. If the NetEye Elastic Stack module is installed:

    • The rubygems.org domain needs to be reachable from the NetEye Master only during the update/upgrade procedure. It is needed to update additional Logstash plugins and is therefore required only if you manually installed any Logstash plugin that is not present by default.

  5. We’ve deprecated the default option in Proxy IP Mode under Configuration / Modules / neteye / Configuration. You can check how to configure it here.

  6. To prepare for the MariaDB Galera migration, ensure that port 3307 is free and available on all NetEye nodes’ interfaces. To ensure a smooth upgrade, consider running the command neteye node check-upgrade-prerequisites before the upgrade to check whether the prerequisites are met.
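To manually check whether anything is already listening on TCP port 3307, you can use ss on each node; the conditional below simply turns the empty-vs-non-empty output into a readable message:

```shell
# List TCP listeners on port 3307; empty output means the port is available.
# Run this on every cluster node before the upgrade.
if ss -Htln 'sport = :3307' | grep -q .; then
    echo "port 3307 is in use - free it before the upgrade"
else
    echo "port 3307 is free"
fi
```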

1. Run the Upgrade

The Cluster Upgrade is carried out by running the following command:

cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out

Warning

If the NetEye Elastic Stack feature module is installed and a new version of Elasticsearch is available, please note that the procedure will upgrade one node at a time and wait for the Elasticsearch cluster health status to turn green before proceeding with the next node. For more information, please consult the dedicated section.

Once the command has been executed, the output will indicate whether the upgrade was successful:

  • In case of a successful upgrade, you might need to restart the nodes to properly apply the upgrades. If a reboot is not needed, skip the next step.

  • In case the command fails, refer to the troubleshooting section.

2. Reboot Nodes

Restart each node, one at a time, to apply the upgrades correctly.

  1. Run the reboot command

    cluster-node-N# neteye node reboot
    
  2. In case of a standard NetEye node, put it back online once the reboot is finished

    cluster-node-N# pcs node unstandby --wait=300
    

You can now reboot the next node.

3. Cluster Reactivation

At this point you can proceed to restore the cluster to high availability operation.

  1. Bring all cluster nodes back out of standby with this command on the last standard node

    cluster# pcs node unstandby --all --wait=300
    cluster# echo $?
    
    0
    

    If the exit code is different from 0, some nodes have not been reactivated, so please make sure that all nodes are active before proceeding.

  2. Run the checks in the section Checking that the Cluster Status is Normal. If any of the above checks fail, please contact our service and support team before proceeding.

  3. Re-enable fencing on the last standard node, if it was enabled prior to the upgrade:

    cluster# pcs property set stonith-enabled=true
    

4. Additional Tasks

Enabling of the Content-Security-Policy (CSP) header

With the upgrade to IcingaWeb2 2.12.x, you can now enable the Content-Security-Policy (CSP) header for the NetEye web interface. The CSP header helps protect your NetEye installation from cross-site scripting (XSS) attacks. The toggle can be found in the NetEye web interface under Configuration > Application > Enable strict content security policy.

Warning

The strict CSP header feature must be enabled before upgrading to the next release of NetEye (4.43).

IcingaWeb2 modules migration

Please make sure that all third-party icingaweb2 modules installed on the system are compatible with the CSP policies before enabling this feature to avoid any issues.

To do so, you can follow the steps below:

  1. Check a module and, if necessary, fix or upgrade it.

  2. Enable the strict CSP flag from the IcingaWeb2 settings.

  3. Verify that everything is working.

  4. In case of problems:

    1. Disable the flag.

    2. Fix the incompatibilities.

    3. Repeat from step 1.

  5. Repeat the process for the next module.

SLM report styling under the Content-Security-Policy (CSP) header

With the introduction of the CSP headers, we had to make minor breaking changes to the way styling is handled in SLM reports. SLM reports can no longer use inline style attributes. If any were used in a custom template, they must be migrated to HTML style elements and applied via classes. Furthermore, any style elements used in a custom template must now include the icingaweb2 style nonce to be compliant with the CSP header. Please refer to the appropriate part of the customizing templates section of the documentation for more information.
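As an illustrative before/after sketch (the class name and the nonce placeholder are hypothetical; the exact way to obtain the nonce in a template is described in the customizing templates documentation):

```html
<!-- Before: inline style attribute, blocked by the strict CSP header -->
<td style="color: #c00;">SLA violated</td>

<!-- After: a nonced <style> element plus a class on the cell -->
<style nonce="{{ csp_nonce }}">
  .sla-violated { color: #c00; }
</style>
<td class="sla-violated">SLA violated</td>
```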