Cluster Upgrade from 4.41 to 4.42¶
This guide will lead you through the steps specific to upgrading a NetEye Cluster installation from version 4.41 to 4.42.
During the upgrade, individual nodes will be put into standby mode, so overall performance will be degraded until the upgrade is completed and all nodes have been taken out of standby mode. Provided the environment has seamless connectivity, the upgrade procedure may take up to 30 minutes per node.
Warning
Remember that you must upgrade sequentially without skipping versions; therefore an upgrade to 4.42 is possible only from 4.41. For example, if you have version 4.27, you must first upgrade to 4.28, then to 4.29, and so on.
Breaking Changes¶
Prerequisites¶
Before starting the upgrade, you should carefully read the latest release notes on NetEye’s blog and check which features will be changed or deprecated after the upgrade.
All NetEye packages installed on the currently running version must be updated according to the update procedure prior to running the upgrade.
NetEye must be up and running in a healthy state.
Disk Space required:

3 GB for / and /var

150 MB for /boot
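The requirements above can be checked with a small pre-flight sketch. This is not an official NetEye tool, and it assumes the 3 GB requirement applies to each of / and /var:

```shell
# Sketch: check free disk space against the upgrade requirements above.
# Assumption: 3GB is needed on each of / and /var (not combined).
check_free() {                       # $1 = mount point, $2 = required MB
  avail=$(df --output=avail -BM "$1" | tail -1 | tr -dc '0-9')
  if [ "$avail" -ge "$2" ]; then
    echo "OK: $1 has ${avail}MB free (needs ${2}MB)"
  else
    echo "FAIL: $1 has only ${avail}MB free (needs ${2}MB)"
  fi
}

check_free /     3072
check_free /var  3072
check_free /boot 150
```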
If the SIEM module is installed:
The rubygems.org domain must be reachable from the NetEye Master during the update/upgrade procedure. This domain is needed to update additional Logstash plugins, and is therefore required only if you manually installed any Logstash plugin that is not present by default.
A backup of MariaDB must be performed before starting the upgrade. After the backup is taken, please confirm that the operation was completed by setting the corresponding flag under
To ensure full compatibility of your configuration with MariaDB 10.11, please run neteye node check-upgrade-prerequisites on the node where MariaDB is running.
Fleet and APM Certificates location change¶
Beginning with NetEye 4.42, users who have installed the SIEM feature module should be aware of a relocation of the external Fleet and APM certificates. Previously located at /neteye/local/elastic-agent/conf/fleet/certs/, these certificates have been moved to /neteye/shared/nginx/conf/certs/. This change facilitates centralized certificate management, given that the Nginx service exclusively uses these certificates for communication with external agents. Prior to this version, the decentralized storage of certificates in the local directories of the operational nodes could result in inconsistencies across the cluster infrastructure. The upgrade process will automatically handle this change and, before starting, will ensure the uniformity across all operational nodes of the certificates and keys located under /neteye/local/elastic-agent/conf/fleet/certs/, namely fleet-server-external.crt, apm-server-external.crt, private/fleet-server-external.key and private/apm-server-external.key.
Note
This is checked automatically as part of the upgrade prerequisites checks, but it can also be verified manually by running the following command: neteye node check-upgrade-prerequisites.
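If you want to verify certificate uniformity by hand, the core of such a check is "are all per-node checksums identical?". A sketch of that building block, with a purely illustrative usage (node names are examples, and SSH access to each operational node is assumed):

```shell
# all_same: succeed only when every input line is identical -- e.g. when
# the sha256 checksum of a certificate is the same on every node.
all_same() { [ "$(sort -u | wc -l)" -le 1 ]; }

# Hypothetical usage across nodes (example node names, requires SSH):
#   for n in neteye01.example neteye02.example; do
#     ssh "$n" sha256sum \
#       /neteye/local/elastic-agent/conf/fleet/certs/fleet-server-external.crt
#   done | awk '{print $1}' | all_same && echo "certificate is uniform"
```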
1. Run the Upgrade¶
The Cluster Upgrade is carried out by running the following command:
cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out
Warning
If the SIEM feature module is installed and a new version of Elasticsearch is available, please note that the procedure will upgrade one node at a time and wait for the Elasticsearch cluster health status to turn green before proceeding with the next node. For more information, please consult the dedicated section.
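The upgrade command above uses a detach-and-follow pattern: the subshell with nohup keeps the upgrade running even if your SSH session drops, while tail --retry -f streams its log. The same pattern with a harmless placeholder job instead of neteye upgrade:

```shell
# Detach-and-follow sketch: the subshell plus nohup detaches the job from
# the terminal, so it survives a closed session; its output goes to a log
# that can be followed. Here we just wait for the job and read the log.
(nohup sh -c 'echo upgrade started; sleep 1; echo upgrade finished' \
  > demo.log 2>&1 &)
sleep 2          # give the detached job time to complete
cat demo.log     # shows both lines once the job is done
```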
After the command has been executed, the output will indicate whether the upgrade was successful:
In case of a successful upgrade you might need to restart the nodes to properly apply the upgrades. If a reboot is not needed, please skip the next step.
In case the command fails, refer to the troubleshooting section.
2. Reboot Nodes¶
Restart each node, one at a time, to apply the upgrades correctly.
Run the reboot command
cluster-node-N# neteye node reboot
In case of a standard NetEye node, put it back online once the reboot is finished
cluster-node-N# pcs node unstandby --wait=300
You can now reboot the next node.
3. Cluster Reactivation¶
At this point you can proceed to restore the cluster to high availability operation.
Bring all cluster nodes back out of standby with this command on the last standard node
cluster# pcs node unstandby --all --wait=300
cluster# echo $?
0
If the exit code is different from 0, some nodes have not been reactivated, so please make sure that all nodes are active before proceeding.
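One way to double-check that no node remains in standby is to parse the output of pcs status nodes. A sketch follows; the sample output is illustrative (on a real cluster you would use status=$(pcs status nodes) instead):

```shell
# Sketch: report nodes still listed on the "Standby:" line of
# "pcs status nodes". Sample output below is illustrative only; on a
# real cluster, replace it with: status=$(pcs status nodes)
status='Pacemaker Nodes:
 Online: neteye01 neteye02
 Standby: neteye03
 Offline:'

standby=$(printf '%s\n' "$status" | sed -n 's/^ *Standby:[[:space:]]*//p')
if [ -n "$standby" ]; then
  echo "Still in standby: $standby"
else
  echo "All nodes active"
fi
```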
Run the checks in the section Checking that the Cluster Status is Normal. If any of those checks fail, please call our service and support team before proceeding.
Re-enable fencing on the last standard node, if it was enabled prior to the upgrade:
cluster# pcs property set stonith-enabled=true
4. Additional Tasks¶
Enabling of the Content-Security-Policy (CSP) header¶
With the upgrade to IcingaWeb2 2.12.x you can now enable the Content-Security-Policy (CSP) header for the NetEye web interface. The CSP header helps protect your NetEye installation from cross-site scripting (XSS) attacks. The toggle for this can be found in the NetEye web interface under Configuration > Application > Enable strict content security policy.
Warning
The strict CSP header feature must be enabled before upgrading to the next release of NetEye (4.43).
IcingaWeb2 modules migration¶
Before enabling this feature, please make sure that all third-party IcingaWeb2 modules installed on the system are compatible with the CSP policies, to avoid any issues.
For each module, you can follow the steps below:
1. Check the module and, if necessary, fix or upgrade it.
2. Enable the strict CSP flag from the IcingaWeb2 settings.
3. Verify that everything is working.
4. In case of problems: disable the flag, fix the incompatibilities, and repeat from step 1.
Repeat the process for the next module.
SLM report styling under the Content-Security-Policy (CSP) header¶
With the introduction of the CSP headers, we had to make minor breaking changes to the way styling is handled in the SLM reports. SLM reports can no longer use inline style attributes. If any were used in a custom template, they must now be migrated to HTML style elements and applied via classes. Furthermore, any HTML style elements used in a custom template must now include the IcingaWeb2 style nonce to be compliant with the CSP header. Please refer to the appropriate section in the customizing templates section of the documentation for more information.