Single Node Upgrade from 4.42 to 4.43¶
This guide will lead you through the steps specific to upgrading a NetEye Single Node installation from version 4.42 to 4.43.
Upgrading a NetEye Single Node takes a nontrivial amount of time. Provided that the environment has reliable connectivity, the upgrade procedure may take up to 30 minutes.
Warning
Remember that you must upgrade sequentially without skipping versions, therefore an upgrade to 4.43 is possible only from 4.42; for example, if you have version 4.27, you must first upgrade to 4.28, then 4.29, and so on.
Breaking Changes¶
Grafana upgrade to 12.0.2¶
Breaking changes
Data source UID format enforcement: The UID format for data sources is now enforced more strictly. Existing data sources whose UID does not comply with the new format may be affected; please refer to the Grafana 12.0 release notes for more details. To avoid issues, a NetEye upgrade prerequisite has been added that prevents the upgrade if any data source UID does not comply with the new format.
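If you want to check your data sources in advance, you can query the Grafana API. The following is a minimal sketch, assuming Grafana answers locally on port 3000 and that the enforced pattern is the documented one (alphanumerics, dashes, and underscores, at most 40 characters); the credentials, URL, and pattern are assumptions to adapt and verify against the release notes:
neteye# curl -s -u admin:ADMIN_PASSWORD http://localhost:3000/api/datasources \
          | jq -r '.[] | select(.uid | test("^[a-zA-Z0-9_-]{1,40}$") | not) | "\(.name): \(.uid)"'
The command prints the name and UID of every data source that does not match the assumed pattern; empty output means no offending UIDs were found.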
Plugin compatibility: Some plugins may not be compatible with Grafana 12.0.2. Please check the compatibility of your plugins before upgrading. For more information, see the Grafana Plugins page, where you can find the compatibility information for each plugin.
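To see which plugins are installed on your instance and compare them against the compatibility information, the standard Grafana CLI can be used (run it on the node where Grafana is installed):
neteye# grafana-cli plugins ls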
Kibana Multi-Instance¶
During the upgrade to NetEye 4.43, Kibana will be moved from a PCS-managed resource to a distributed multi-instance architecture. This change not only enhances the reliability of the Kibana service on NetEye clusters, but also distributes the workload more efficiently across the nodes. For system consistency, the Kibana migration will also be performed on NetEye single-node installations, where Kibana will benefit from the same infrastructure and security improvements.
Directory Location Change¶
With the migration to Kibana multi-instance, the Kibana directory has been moved from /neteye/shared/kibana to /neteye/local/kibana. This is a breaking change that affects any custom backup scripts targeting the old location, monitoring configurations checking the old path, and custom integrations or automations accessing logs or configuration in the old location.
If you have any scripts, monitoring, or configurations that reference the old path, please update them to use the new location to ensure proper functionality after the upgrade.
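To locate such references before the upgrade, a simple recursive search can help; the directories below are only examples, extend the list to wherever your custom scripts and configurations live:
neteye# grep -rl '/neteye/shared/kibana' /etc /usr/local/bin /root 2>/dev/null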
Kibana Configuration File Change¶
With the new Kibana multi-instance architecture, the configuration file has been moved from the old kibana/conf/kibana.yml to a file dedicated to user customizations, /neteye/local/kibana/conf/kibana_user_customization.yml. The original configuration file kibana/conf/kibana.yml is now managed by NetEye and should not be modified directly.
Additionally, since the configuration in the kibana_user_customization.yml file is applied to all Kibana instances in the cluster, an additional configuration file, /neteye/local/kibana/conf/kibana_local_customization.yml, has been introduced to allow local customizations that can differ on each node.
For more information on how to configure Kibana, please refer to the Architecture section.
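As an illustration, a cluster-wide setting can be added by appending it to the user customization file; the setting shown here (the standard Kibana logging.root.level option) is only an example, not a required change:
neteye# cat >> /neteye/local/kibana/conf/kibana_user_customization.yml <<'EOF'
# example only: set the log verbosity for all Kibana instances
logging.root.level: info
EOF
A node-specific override would go into kibana_local_customization.yml on that node instead.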
Kibana systemd service renaming¶
The systemd service for Kibana has been renamed from kibana-logmanager.service to kibana.service.
This means that any custom scripts, monitoring tools, or systemd configurations that reference the old kibana-logmanager.service will need to be updated to use the new service name kibana.service.
Keep in mind that kibana.service will run on all the nodes configured for Kibana, and no longer on a single node as was the case with the previous kibana-logmanager.service.
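To find leftover references to the old unit name and to confirm the new unit, commands along these lines can be used; the searched directories are examples to adapt:
neteye# grep -rl 'kibana-logmanager.service' /etc/systemd /usr/local/bin 2>/dev/null
neteye# systemctl status kibana.service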
Kibana volume¶
For the new Kibana multi-instance architecture, the Kibana DRBD volume will be removed. Although all the content of the Kibana volume will be migrated to the new location /neteye/local/kibana, it is important to note that no dedicated logical volume will be created for the /neteye/local/kibana directory: its content will reside on the /neteye LV instead.
If you still want to use a custom logical volume for Kibana, you can create it manually and mount it on the new path /neteye/local/kibana.
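A minimal sketch of such a manual setup follows, assuming a volume group named vg00, an XFS filesystem, and a 10 GB size, all of which must be adapted to your layout; the existing directory content has to be preserved before the new volume is mounted over it:
neteye# systemctl stop kibana.service
neteye# mv /neteye/local/kibana /neteye/local/kibana.bak
neteye# mkdir /neteye/local/kibana
neteye# lvcreate -L 10G -n lv_kibana vg00          # VG name and size are assumptions
neteye# mkfs.xfs /dev/vg00/lv_kibana
neteye# echo '/dev/vg00/lv_kibana /neteye/local/kibana xfs defaults 0 0' >> /etc/fstab
neteye# mount /neteye/local/kibana
neteye# cp -a /neteye/local/kibana.bak/. /neteye/local/kibana/   # restore the migrated content
neteye# systemctl start kibana.service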
Kibana port on local interface¶
To prepare for the Kibana multi-instance migration, ensure that port 5602 is free and available on the local interface of all NetEye nodes, both in single-node and cluster installations.
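You can verify this on each node with a quick socket check; empty grep output (and the fallback message) means the port is free:
neteye# ss -tlnp | grep ':5602' || echo "port 5602 is free"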
Prerequisites¶
Before starting the upgrade, carefully read the latest release notes on NetEye’s blog and check the features that will be changed or deprecated after the upgrade.
All NetEye packages installed on a currently running version must be updated according to the update procedure prior to running the upgrade.
NetEye must be up and running in a healthy state.
Disk Space required (a quick check is shown after this list):
3GB for / and /var
150MB for /boot
If the NetEye Elastic Stack module is installed:
The rubygems.org domain should be reachable by the NetEye Master only during the update/upgrade procedure. This domain is needed to update additional Logstash plugins and thus is required only if you manually installed any Logstash plugin that is not present by default.
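You can quickly verify the available space on the relevant filesystems before starting:
neteye# df -h / /var /boot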
1. Run the Upgrade¶
To perform the upgrade, run the following command from the command line:
neteye# (nohup neteye upgrade &) && tail --retry -f nohup.out
After the command has been executed, the output will report whether the upgrade was successful:
In case of a successful upgrade you might need to restart NetEye to properly apply the upgrades. If the reboot is not needed, please skip the next step.
In case the command fails, refer to the troubleshooting section.
2. Reboot¶
Restart NetEye to apply the upgrades correctly.
neteye# neteye node reboot
3. Additional Tasks¶
Migration to NetworkManager¶
Starting from NetEye 4.44, the deprecated network-scripts will be removed in favour of NetworkManager. Before upgrading to NetEye 4.44, you will be required to migrate the network configuration to NetworkManager.
To carry this out, a dedicated command is available: neteye node network migrate. This command migrates the network configuration from the old network-scripts to NetworkManager.
Note
The future upgrade procedure will expect the NetworkManager.service systemd service to be enabled and running, and the network.service to be disabled and stopped. If your system already meets these requirements, you can skip the migration step.
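You can check whether your system already meets these requirements with systemctl; the expected output is enabled and active for NetworkManager.service, and disabled and inactive for network.service:
neteye# systemctl is-enabled NetworkManager.service
neteye# systemctl is-active NetworkManager.service
neteye# systemctl is-enabled network.service
neteye# systemctl is-active network.service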
Warning
You can decide to carry out the migration manually or to use the provided command. If the latter is used, it is highly recommended to check the dedicated command documentation for more information on how to do it safely.