User Guide

Cluster Upgrade from 4.36 to 4.37

This guide leads you through the steps required to upgrade a NetEye Cluster installation from version 4.36 to 4.37.

During the upgrade, individual nodes will be put into standby mode, so overall performance will be degraded until the upgrade is completed and all nodes are taken out of standby mode. Provided that network connectivity is stable, the upgrade procedure may take up to 30 minutes per node.

Warning

Remember that you must upgrade sequentially without skipping versions; therefore an upgrade to 4.37 is possible only from 4.36. For example, if you have version 4.27, you must first upgrade to 4.28, then to 4.29, and so on.

Breaking Changes

Removal of Tornado Legacy and Event Handler

Starting from NetEye 4.37, the Event Handler and Tornado Legacy modules are no longer available. All their functionalities are now integrated into the Tornado module. It is essential to migrate all Event Handler event processing configuration to Tornado before proceeding with the upgrade.

The Tornado Legacy module served as the initial GUI for managing the Tornado daemon configuration and therefore does not need migration. All its functionality, and much more, is now available in the new Tornado module, which offers a completely redesigned user experience.

In order to proceed with the upgrade, NetEye must be made aware that the Event Handler is no longer used. To do so, enable the toggle in the “Event Handler removal: migration completed” section of the Event Handler module.

Tornado Rules Filesystem Layout

With NetEye 4.37 we introduced the Iterator node feature in Tornado. To support its configuration, clean up some ambiguous parsing behaviour, and improve the Tornado code base, we slightly refactored the way the configuration is loaded, in a backwards-incompatible way.

The configuration will be migrated automatically for you, but if you edit the configuration on the filesystem, you will have to adhere to the new structure. The new structure is fully documented in the Processing Tree Configuration section.

To summarize the changes made:

  • The configuration files for Filters must now be named filter.json. Previously, any name with the file ending .json was allowed.

  • Rulesets must now contain a ruleset.json file, and their Rules must be located in the rules subdirectory.

  • Filters and Rulesets must contain the fields type and name; the directory name is no longer relevant for the node name.

  • Support for implicit Filters was removed.

  • The root node is now a fully virtual Filter and cannot be overwritten.

  • The configuration version is denoted in the file version.json in the root of the rules directory rules.d.
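As a sketch, a minimal tree following the new layout might look like the one built below. All node, file, and rule names here are invented for illustration, and the JSON bodies are simplified placeholders, not the full schema; refer to the Processing Tree Configuration section for the authoritative structure and field set.

```shell
# Build an illustrative example of the new rules.d layout in a temporary
# directory (paths and JSON contents are simplified sketches).
base=$(mktemp -d)/rules.d
mkdir -p "$base/my_filter/my_ruleset/rules"

# The configuration version lives in version.json in the root of rules.d
echo '{ "version": "1.0" }' > "$base/version.json"

# Filters must be named filter.json and declare "type" and "name" explicitly
echo '{ "type": "filter", "name": "my_filter", "active": true }' \
    > "$base/my_filter/filter.json"

# Rulesets need a ruleset.json file; their rules go into the "rules" subdirectory
echo '{ "type": "ruleset", "name": "my_ruleset" }' \
    > "$base/my_filter/my_ruleset/ruleset.json"
echo '{ "name": "my_rule", "active": true }' \
    > "$base/my_filter/my_ruleset/rules/my_rule.json"

# Print the resulting relative layout
find "$base" -type f | sed "s|$base/||" | sort
```

The final find command prints the relative layout: my_filter/filter.json, my_filter/my_ruleset/rules/my_rule.json, my_filter/my_ruleset/ruleset.json and version.json.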

Relying on the filesystem for future automation is strongly discouraged. We recommend using the Tornado API instead.

Tornado Configuration Export Compatibility

Due to the introduction of the Iterator nodes in Tornado, configurations exported starting from NetEye 4.37 and later are not backward compatible with versions 4.36 and earlier. However, configurations exported from older versions remain compatible with newer versions and can still be imported into 4.37.

Upgrade of Elastic Stack to version 8.14.3

In NetEye 4.37, Elastic Stack was upgraded from version 8.11.3 to version 8.14.3.

To ensure the full compatibility of your installation, please review the official release notes, focusing in particular on the breaking changes of the Elastic Stack components: Elasticsearch, Logstash, Kibana, Beats and Elastic Agent.

Among the breaking changes listed by Elastic, we would like to emphasize those that may impact your NetEye installation:

Please note that, for a complete review, the release notes of all versions between 8.11.3 and 8.14.3 should be consulted. Here are the links to the breaking changes of the minor upgrades: 8.12, 8.13 and 8.14.

Prerequisites

Before starting the upgrade, carefully read the latest release notes on NetEye’s blog and check which features will be changed or deprecated after the upgrade.

  1. All NetEye packages installed on a currently running version must be updated according to the update procedure prior to running the upgrade.

  2. NetEye must be up and running in a healthy state.

  3. Disk Space required:

    • 3GB for / and /var

    • 150MB for /boot

  4. If the SIEM module is installed:

    • The rubygems.org domain needs to be reachable by the NetEye Master only during the update/upgrade procedure. This domain is needed to update additional Logstash plugins, and is thus required only if you manually installed any Logstash plugin that is not present by default.

  5. If the Alyvix module is installed:

    • The Multitenancy feature should be enabled for the Alyvix module. For more information, please refer to the Alyvix Multitenancy migration documentation.
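The disk-space requirement in point 3 can be verified up front with a small sketch like the following (thresholds taken from the list above; df -Pm reports sizes in 1 MB units):

```shell
# Check free disk space against the upgrade prerequisites.
# df -Pm prints sizes in 1 MB blocks; column 4 is the available space.
free_mb() { df -Pm "$1" | awk 'NR==2 {print $4}'; }

for fs in / /var; do
    if [ "$(free_mb "$fs")" -lt 3072 ]; then    # 3 GB
        echo "WARNING: less than 3GB free on $fs"
    fi
done
if [ -d /boot ] && [ "$(free_mb /boot)" -lt 150 ]; then    # 150 MB
    echo "WARNING: less than 150MB free on /boot"
fi
echo "disk space check completed"
```

If any warning is printed, free up space before running the upgrade.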

1. Run the Upgrade

The Cluster Upgrade is carried out by running the following command:

cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out

Warning

If the SIEM feature module is installed and a new version of Elasticsearch is available, please note that the procedure will upgrade one node at a time and wait for the Elasticsearch cluster health status to turn green before proceeding with the next node. For more information, please consult the dedicated section.

Once the command has been executed, the output will indicate whether the upgrade was successful:

  • In case of a successful upgrade, you might need to restart the nodes to properly apply the upgrades. If a reboot is not needed, please skip the next step.

  • In case the command fails, refer to the troubleshooting section.

2. Reboot Nodes

Restart each node, one at a time, to apply the upgrades correctly.

  1. Run the reboot command

    cluster-node-N# neteye node reboot
    
  2. If it is a standard NetEye node, put it back online once the reboot has finished

    cluster-node-N# pcs node unstandby --wait=300
    

You can now reboot the next node.
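Between reboots you can double-check that no node was accidentally left in standby by inspecting the output of pcs status nodes. The snippet below parses a hypothetical sample of that output; on a live cluster, set sample=$(pcs status nodes) instead, and note that the exact format may vary between pcs versions.

```shell
# Hypothetical sample of `pcs status nodes` output; node names are invented.
# On a live cluster, replace this with: sample=$(pcs status nodes)
sample='Pacemaker Nodes:
 Online: neteye01 neteye02
 Standby: neteye03
 Offline:'

# Print any node still listed after the "Standby:" label
standby=$(printf '%s\n' "$sample" | awk '/Standby:/ {for (i = 2; i <= NF; i++) print $i}')
if [ -n "$standby" ]; then
    echo "Still in standby: $standby"
fi
```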

3. Cluster Reactivation

At this point you can proceed to restore the cluster to high availability operation.

  1. Bring all cluster nodes back out of standby with this command, run on the last standard node

    cluster# pcs node unstandby --all --wait=300
    cluster# echo $?
    
    0
    

    If the exit code differs from 0, some nodes have not been reactivated; please make sure that all nodes are active before proceeding.

  2. Run the checks in the section Checking that the Cluster Status is Normal. If any of these checks fail, please contact our service and support team before proceeding.

  3. Re-enable fencing on the last standard node, if it was enabled prior to the upgrade:

    cluster# pcs property set stonith-enabled=true
    

4. Additional Tasks

In this upgrade, no additional manual steps are required.