User Guide

Cluster Upgrade from 4.23 to 4.24

This guide leads you through the steps specific to upgrading a NetEye Cluster installation from version 4.23 to 4.24.

Warning

Remember that you must upgrade sequentially, without skipping versions; an upgrade to 4.24 is therefore possible only from 4.23. For example, if you are running version 4.14, you must first upgrade to 4.15, then 4.16, and so on.

Before starting an upgrade, read the latest release notes on NetEye’s blog very carefully and check the feature changes and deprecations specific to the version being upgraded. You should also check the whole section Breaking Changes below.

The remainder of this section is organised as follows. Section Breaking Changes introduces substantial changes that users must be aware of before starting the upgrade procedure and that may require some tasks to be carried out beforehand; section Prerequisites provides information to know before starting the upgrade procedure; section Conventions Used defines some notation used in this procedure; section Running the Upgrade presents the actual procedure, including directions for special nodes; section Additional Tasks shows which tasks must be executed after the upgrade procedure has completed successfully; finally, section Cluster Reactivation explains how to bring the NetEye Cluster back to full functionality.

Breaking Changes

NetEye Setup

Tags and new neteye node tags command

Release 4.23 introduced RHEL 8 as new underlying operating system, together with (optional) Red Hat Insights, while release 4.24 added automatic registration to both.

This means that a NetEye Single Node, Cluster, or Satellite must record a few pieces of information (Customer ID, contract number, type of installation and deployment; see Section neteye node tags set for more details) that will be used for registering the node to Red Hat.

The command neteye node tags set, used to record the required data, will be installed during the upgrade. The upgrade process will therefore pause to allow you to run the command and register the data, after which you can resume the upgrade.

Make sure to have all this data at hand during the upgrade process; if you do not know some or all of it, please refer to the official channels (sales, consultants, or the support portal) to obtain it.

Elastic Stack

JVM Options

From NetEye 4.24 onwards, the JVM options file /neteye/local/elasticsearch/conf/jvm.options contains only the standard options shipped by Elastic; all other options, including the NetEye defaults, are placed in separate options files inside the /neteye/local/elasticsearch/conf/jvm.options.d/ folder. If custom options were present in the /neteye/local/elasticsearch/conf/jvm.options file, an rpmsave file will be created, and the customizations must be migrated to an .options file in the /neteye/local/elasticsearch/conf/jvm.options.d/ folder, for example /neteye/local/elasticsearch/conf/jvm.options.d/02_custom_jvm.options. In case of a cluster, this operation must be performed on all nodes where Elasticsearch is installed.
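As a sketch of this migration, the customizations can be extracted by comparing the rpmsave file with the new standard file. The example below simulates the two files in a temporary directory; on a real node, substitute /neteye/local/elasticsearch/conf and review the result by hand before restarting Elasticsearch. The heap settings shown are illustrative assumptions, not recommended values.

```shell
# Simulated migration of custom JVM options; on a real node, replace $tmp
# with /neteye/local/elasticsearch/conf and inspect the output manually.
tmp=$(mktemp -d)
mkdir -p "$tmp/jvm.options.d"
# Old file carrying local customizations (saved as .rpmsave by the upgrade):
printf -- '-XX:+UseG1GC\n-Xms4g\n-Xmx4g\n' > "$tmp/jvm.options.rpmsave"
# New standard file shipped by Elastic:
printf -- '-XX:+UseG1GC\n' > "$tmp/jvm.options"
# Lines present only in the rpmsave file are the customizations to migrate:
sort "$tmp/jvm.options.rpmsave" > "$tmp/old.sorted"
sort "$tmp/jvm.options" > "$tmp/new.sorted"
comm -23 "$tmp/old.sorted" "$tmp/new.sorted" \
  > "$tmp/jvm.options.d/02_custom_jvm.options"
cat "$tmp/jvm.options.d/02_custom_jvm.options"
```

Any real customizations should still be reviewed line by line rather than copied blindly, since some old options may no longer apply to the new Elasticsearch version.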

If you would like to specify or override some options, please refer to Section Elasticsearch JVM Optimization of the User Guide.

Netflow Filebeat module

NetEye 4.24 makes it possible to modify the Netflow Filebeat module parameters, namely its status (enabled or disabled), host, and port, without generating an rpmnew file or requiring file renaming during future updates, thus preventing the module from being re-installed in an enabled state. This is achieved by adopting environment variables with default values, which can be overridden by defining them in the /neteye/shared/filebeat/conf/sysconfig/filebeat-user-customization file, as described in Section Filebeat Netflow module specific configuration.
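As an illustration only, an override in the filebeat-user-customization file could look like the fragment below. The variable names are hypothetical assumptions; check Section Filebeat Netflow module specific configuration for the actual names supported by your NetEye version.

```shell
# /neteye/shared/filebeat/conf/sysconfig/filebeat-user-customization
# NOTE: the variable names below are illustrative assumptions,
# not the documented ones.
NETFLOW_MODULE_ENABLED=false   # hypothetical: disable the Netflow module
NETFLOW_HOST=0.0.0.0           # hypothetical: listening address
NETFLOW_PORT=2055              # hypothetical: listening port
```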

Logstash pipelines configurations

With the new Elastic Stack version, Logstash pipelines configuration files are now set as NetEye config files, so user customizations are preserved during updates. Moreover, the logic behind the storage of Logstash credentials has been completely reworked, and they are now saved as environment variables. As a result, this update introduces the following four rpmnew files, which need to be migrated:

/neteye/shared/logstash/conf/conf.auditbeat.d/1_f020_enrich_host.filter.rpmnew
/neteye/shared/logstash/conf/conf.filebeat.d/1_f020_enrich_host.filter.rpmnew
/neteye/shared/logstash/conf/conf.winlogbeat.d/1_f020_enrich_host.filter.rpmnew
/neteye/shared/logstash/conf/conf.ebp.d/2_o01_ebp_proxy.output.rpmnew

More details on how Logstash pipelines configurations can now be more easily customized can be found in Section Logstash Configuration.
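A minimal sketch of the migration loop, simulated in a temporary directory, is shown below. On a real node, the same loop would run inside /neteye/shared/logstash/conf after a backup, and any local customizations must be merged into the adopted files rather than simply overwritten.

```shell
# Simulated adoption of Logstash pipeline rpmnew files; on a real node,
# run the loop inside /neteye/shared/logstash/conf after backing up.
conf=$(mktemp -d)
mkdir -p "$conf/conf.auditbeat.d"
printf 'old filter\n' > "$conf/conf.auditbeat.d/1_f020_enrich_host.filter"
printf 'new filter\n' > "$conf/conf.auditbeat.d/1_f020_enrich_host.filter.rpmnew"
for f in "$conf"/conf.*.d/*.rpmnew; do
  diff -u "${f%.rpmnew}" "$f" || true   # review what changed before adopting
  mv "$f" "${f%.rpmnew}"                # adopt the new version
done
cat "$conf/conf.auditbeat.d/1_f020_enrich_host.filter"
```

Here the new file is adopted wholesale because the simulated old file carries no customizations; if yours does, merge them into the new file before removing the rpmnew suffix.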

Prerequisites

Upgrading a NetEye Cluster will take a nontrivial amount of time. During the upgrade, individual nodes will be put into standby mode and so overall performance will be degraded until the upgrade procedure is completed and all nodes are removed from standby mode.

When the cluster is healthy, no additional NetEye modules are installed, and the procedure is successful, a full upgrade (update + upgrade) takes approximately 30 minutes, plus 15 minutes per node.

So, for instance, on a 3-node cluster it may take approximately 1 hour and 15 minutes (30 + 15*3).
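The estimate above can be computed for any cluster size as a fixed 30 minutes plus 15 minutes per node, for example:

```shell
# Rough upgrade duration: 30 minutes fixed plus 15 minutes per node.
nodes=3
echo "$(( 30 + 15 * nodes )) minutes"   # prints "75 minutes" for a 3-node cluster
```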

Warning

This estimate does not include the time required to download the packages, nor the time needed for manual intervention, such as migrating configurations due to breaking changes or recovering from failed tasks during the execution of the neteye update and neteye upgrade commands.

Conventions Used

A NetEye cluster can be composed of different types of nodes, including Elastic-only and Voting-only nodes, which require a different upgrade procedure. The following notation is therefore used to identify nodes in the cluster.

  • (ALL) is the set of all cluster nodes

  • (N) indicates the NetEye master node of the Cluster

  • (E) is an Elastic-only node

  • (V) is a Voting-only node

  • (OTHER) is the set of all nodes excluding (N), (E), and (V)

For example, if we take the sample cluster defined in The Elected NetEye Master, (ALL) is my-neteye-01, my-neteye-02, my-neteye-03, my-neteye-04, and my-neteye-05.

  • (N) is my-neteye-01

  • (OTHER) is composed of my-neteye-02 and my-neteye-03

  • (E) is my-neteye-04

  • (V) is my-neteye-05

Note

Please see The Elected NetEye Master for a discussion about the Cluster Master Node.

Running the Upgrade

Note

Recall that the upgrade process will be stopped to allow you to provide tags for registration to Red Hat and Red Hat Insights. Refer to Section Breaking Changes for more information.

The Cluster Upgrade is carried out by running the command:

cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out

All the tasks carried out by the command are listed in section neteye upgrade; a dedicated section provides directions in case the command fails.

Warning

The neteye upgrade command can be run on a standard NetEye node, but it must never be issued on an Elastic-only (E) or a Voting-only (V) node, because it would turn these nodes into NetEye Nodes.

Special Nodes

In the context of the upgrade procedure, special nodes are Elastic-only (E) and Voting-only (V) nodes. They do not need to be upgraded manually, because the neteye upgrade command automatically takes care of upgrading them.

Additional Tasks

In this upgrade, no additional manual steps are required.

Cluster Reactivation

You can now restore the cluster to high availability operation.

  • Bring all cluster nodes back out of standby with this command on the last node (N):

    # pcs node unstandby --all --wait=300
    # echo $?
    
    0
    

    If the exit code is different from 0, some nodes have not been reactivated, so please make sure that all nodes are active before proceeding.

  • Run the checks in the section Checking that the Cluster Status is Normal. If any of the above checks fail, please call our service and support team before proceeding.

  • Re-enable fencing on the last node (N), if it was enabled prior to the upgrade:

    # pcs property set stonith-enabled=true
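To verify that fencing is active again, the cluster property and resource status can be inspected. Note that the exact subcommand names vary slightly between pcs versions, so treat the commands below as a sketch.

```shell
# Sketch: confirm fencing is enabled again. Newer pcs releases use
# "pcs property config"; older releases use "pcs property show" instead.
pcs property config stonith-enabled
# Check the overall cluster state, including the stonith resources:
pcs status
```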