Cluster Upgrade from 4.33 to 4.34¶
This guide will lead you through the steps specific for upgrading a NetEye Cluster installation from version 4.33 to 4.34.
During the upgrade, individual nodes will be put into standby mode, so overall performance will be degraded until the upgrade is completed and all nodes are taken out of standby mode. Provided the environment connectivity is stable, the upgrade procedure may take up to 30 minutes per node.
Warning
Remember that you must upgrade sequentially without skipping versions; therefore an upgrade to 4.34 is possible only from 4.33. For example, if you have version 4.27, you must first upgrade to 4.28, then to 4.29, and so on.
Breaking Changes¶
Upgrade of Elastic Stack to version 8.11.3¶
In NetEye 4.34, we upgraded the Elastic Stack from version 8.10.2 to version 8.11.3.
To ensure the full compatibility of your installation, please review the official release notes, focusing in particular on the breaking changes of the Elastic Stack components: Elasticsearch, Logstash, Kibana and Beats.
Please note that, to be thorough, you should review the release notes of all versions between 8.10.2 and 8.11.3.
Prerequisites¶
Before starting the upgrade, you should carefully read the latest release notes on NetEye’s blog and check which features will be changed or deprecated after the upgrade.
All NetEye packages installed on a currently running version must be updated according to the update procedure prior to running the upgrade.
NetEye must be up and running in a healthy state.
Disk Space required:
3GB for / and /var
150MB for /boot
If the SIEM module is installed:
The rubygems.org domain needs to be reachable from the NetEye Master only during the update/upgrade procedure. This domain is needed to update additional Logstash plugins, and is therefore required only if you manually installed any Logstash plugin that is not present by default.
The TCP port 5045 must be open so that Logstash can receive incoming logs from Elastic Agents (a minimal check of both points is sketched after this list).
If custom Fleet policies were applied to the Elastic Agents running on the NetEye Master nodes before NetEye 4.33, remember to complete the migration of those policies to the NetEye official ones (see the 4.33 upgrade guide), since the upgrade to NetEye 4.34 will enforce the NetEye official policies if they have not yet been applied.
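A minimal sketch of how some of these prerequisites can be checked from a shell on the NetEye Master is shown below; the commands are illustrative only and assume the standard df, curl and ss utilities are available:

# Free disk space on the mount points listed above
df -h / /var /boot

# SIEM only: rubygems.org reachability (needed only if extra Logstash plugins were installed manually)
curl -sI https://rubygems.org > /dev/null && echo "rubygems.org is reachable"

# SIEM only: check whether a listener (Logstash) is bound to TCP 5045 for Elastic Agent logs;
# the firewall must also allow incoming traffic on this port
ss -ltn | grep ':5045'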
1. Run the Upgrade¶
The Cluster Upgrade is carried out by running the following command:
cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out
Warning
If the SIEM feature module is installed and a new version of Elasticsearch is available, please note that the procedure will upgrade one node at a time and wait for the Elasticsearch cluster health status to turn green before proceeding with the next node. For more information, please consult the dedicated section.
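If you want to follow the Elasticsearch upgrade yourself, the cluster health can be queried through the standard _cluster/health API. The following is only an illustrative sketch: the hostname elasticsearch.neteyelocal and the certificate paths are assumptions and must be adapted to your installation.

# Query the Elasticsearch cluster health; the node-by-node upgrade proceeds
# only once "status" reports "green".
# NOTE: the URL and the certificate/key paths below are placeholders.
curl -s --cacert /path/to/ca.crt \
     --cert /path/to/client.crt --key /path/to/client.key \
     "https://elasticsearch.neteyelocal:9200/_cluster/health?pretty"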
Once the command has been executed, the output will indicate whether the upgrade was successful:
In case of a successful upgrade, you might need to restart the nodes to properly apply the upgrades. If a reboot is not needed, please skip the next step.
In case the command fails, refer to the troubleshooting section.
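If you are unsure whether a reboot is actually required, the following supplementary check can help on RHEL-based systems; this is only a hint, assumes the dnf-utils (yum-utils) package is installed, and the output of the upgrade command remains the authoritative indication:

# needs-restarting is shipped with dnf-utils/yum-utils:
# exit code 1 means a reboot is required, 0 means it is not.
needs-restarting -r
echo $?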
2. Reboot Nodes¶
Restart each node, one at a time, to apply the upgrades correctly.
Run the reboot command
cluster-node-N# neteye node reboot
If the node is a standard NetEye node, put it back online once the reboot is finished
cluster-node-N# pcs node unstandby --wait=300
You can now reboot the next node.
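Before rebooting the next node, you may want to verify that the rebooted node has actually left standby and rejoined the cluster, for example with:

cluster-node-N# pcs status nodes

The rebooted node should be listed as Online and must no longer appear under Standby.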
3. Cluster Reactivation¶
At this point you can proceed to restore the cluster to high availability operation.
Bring all cluster nodes back out of standby by running the following command on the last standard node:
cluster# pcs node unstandby --all --wait=300
cluster# echo $?
0
If the exit code is different from 0, some nodes have not been reactivated, so please make sure that all nodes are active before proceeding.
Run the checks in the section Checking that the Cluster Status is Normal. If any of the above checks fail, please call our service and support team before proceeding.
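As a quick first look before walking through that section, the overall cluster state can be inspected directly; this is only a partial glance at those checks, not a replacement for them:

cluster# pcs status

All resources should be reported as Started, no node should be in standby, and there should be no failed resource actions.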
Re-enable fencing on the last standard node, if it was enabled prior to the upgrade:
cluster# pcs property set stonith-enabled=true
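You can then verify that the property has been applied; depending on the pcs version installed, the subcommand may be show or config:

cluster# pcs property show stonith-enabled

The output should report stonith-enabled: true.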