User Guide

Cluster Upgrade from 4.29 to 4.30

This guide leads you through the steps specific to upgrading a NetEye Cluster installation from version 4.29 to 4.30.

During the upgrade, individual nodes will be put into standby mode, so overall performance will be degraded until the upgrade is completed and all nodes are taken out of standby mode. Provided that the environment's connectivity is reliable, the upgrade procedure may take up to 30 minutes per node.

Warning

Remember that you must upgrade sequentially without skipping versions; therefore, an upgrade to 4.30 is possible only from 4.29. For example, if you have version 4.21, you must first upgrade to 4.22, then to 4.23, and so on.

Starting from 4.30, as defined in Configuration of Tenants, a new role will appear in the Roles section of the NetEye Access Control for every newly created Tenant. By default, the new role is named following the syntax neteye_tenant_<tenant_name> and inherits the Module permission settings of the Tenant it was created for.

If any Tenants were added prior to the upgrade to 4.30, a new role will be created for each of them, including the Master Tenant, once the upgrade procedure described below has been run.

Breaking Changes

Migration of Tenants’ Grafana Organizations

In NetEye 4.29, we introduced a “feature flag”, which ensures that the prerequisites for the upgrade to NetEye 4.30 are satisfied.

Starting from NetEye 4.30, each Tenant will have its own Grafana Organization. The upgrade command will create a new Grafana Organization for each Tenant. The name of the Grafana Organization will correspond to the Tenant’s display-name.

If you have already created a Grafana Organization for some of your Tenants, you should specify it in the Tenants' configuration, which allows NetEye to skip the creation of a new Organization for those Tenants and reuse the existing ones. For this purpose, the neteye tenant config modify command can be used as follows:

neteye tenant config modify TENANT_NAME \
--custom-override-grafana-org CUSTOM_GRAFANA_ORG_NAME
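
For example, assuming a hypothetical Tenant named tenantA whose dashboards already live in a Grafana Organization called Tenant A Org (both names are purely illustrative), the call would look like:

neteye tenant config modify tenantA \
--custom-override-grafana-org "Tenant A Org"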

Note

Both in single-Tenant and multi-Tenant environments we suggest having a Grafana Organization dedicated to the Master Tenant, while the Main Org. should contain only system-level objects. Therefore, you should not link the Master Tenant to the Main Org.

Also note that the specified Grafana Organization must already exist when running the above command.
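
If you want to double check the existing Organizations from the command line, the Grafana HTTP API can list them. The following is only a sketch: it assumes Grafana is reachable locally on port 3000 and that you authenticate with a Grafana server admin user, so adjust the URL and credentials to your installation (for instance if Grafana is served behind the NetEye proxy):

curl -s -u admin:ADMIN_PASSWORD http://localhost:3000/api/orgs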

After modifying the Tenants' configuration, the feature flag can be enabled via Configuration / Modules / analytics / Configuration. Regardless of whether you modified any Tenant configuration, the feature flag must be enabled to proceed with the upgrade.

Warning

By enabling the feature flag, you declare that you have modified the configuration of the Tenants that already have an associated Grafana Organization by executing neteye tenant config modify with the --custom-override-grafana-org parameter.

Alyvix Nodes Architectures

In NetEye 4.30, we introduced the concept of Alyvix nodes architectures.

During the upgrade from NetEye 4.29 to NetEye 4.30, every host marked as Alyvix node in the Director will be converted into an Alyvix single Tenant node which communicates directly with the NetEye Master. Please note that after the upgrade, a deploy of the Director is needed for the changes to take effect.
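
The deploy can be performed from the Director web interface as usual; alternatively, assuming the Icinga Director CLI is available on the NetEye Master, a minimal sketch of the equivalent command is:

cluster# icingacli director config deploy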

If any of the Alyvix nodes listed in the Director should instead be of a different Alyvix node type, please apply the correct setting as explained in Create an Alyvix Node once the upgrade to NetEye 4.30 has finished.

Furthermore, in case you migrate an Alyvix node to an Alyvix Tenant shared node in a multitenant environment, follow the instructions on how to assign a Tenant to the node's sessions in order to collect its performance metrics. Please note that using the certificates stored in /neteye/shared/nats-server/conf/certs/master-alyvix_metrics_wo.crt.pem is now deprecated and support for those certificates will be removed in the future.

Enabling Modules for Tenants

From 4.30, NetEye allows you to enable NetEye Feature Modules at Tenant level, so that you can enable a Feature Module for one of your Tenants but not for the others.

During the upgrade to NetEye 4.30, the list of enabled Feature Modules in the configuration of existing Tenants is left empty. For this reason, after completing the upgrade to NetEye 4.30, please enable, for each Tenant, all the Feature Modules it uses.

Note

Feature Modules are to be enabled on each Tenant including Master Tenant, if needed. There are no Tenants where Feature Modules are enabled by default.

For example, if Tenant tenantA uses the Feature Modules neteye-alyvix and neteye-asset, after completing the upgrade to 4.30 you should enable those modules with:

neteye tenant config modify tenantA \
--enable-module neteye-alyvix --enable-module neteye-asset

For more information refer to neteye tenant config create.

Note that if you do not enable a module for a Tenant, some features of that module will not be available to that Tenant.

Prerequisites

Before starting the upgrade, you should carefully read the latest release notes on NetEye's blog and check which features will be changed or deprecated after the upgrade.

  1. All NetEye packages installed on a currently running version must be updated according to the update procedure prior to running the upgrade.

  2. NetEye must be up and running in a healthy state.

  3. Disk Space required:

    • 3GB for / and /var

    • 150MB for /boot
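
You can quickly verify the available disk space with standard tools before starting; for example:

cluster# df -h / /var /boot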

1. Run the Upgrade

The Cluster Upgrade is carried out by running the following command:

cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out

After the command has been executed, the output will indicate whether the upgrade was successful:

  • In case of a successful upgrade, you might need to restart the nodes to properly apply the changes. If no reboot is needed, please skip the next step.

  • In case the command fails, refer to the troubleshooting section.

2. Reboot Nodes

Restart each node, one at a time, to apply the upgrades correctly.

  1. Run the reboot command

    cluster-node-N# neteye node reboot
    
  2. In case of a standard NetEye node, put it back online once the reboot is finished

    cluster-node-N# pcs node unstandby --wait=300
    

You can now reboot the next node.
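
Before moving on to the next node, it can be useful to verify that the rebooted node has rejoined the cluster and is no longer in standby; a minimal check, for example:

cluster# pcs status nodes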

3. Cluster Reactivation

At this point you can proceed to restore the cluster to high availability operation.

  1. Bring all cluster nodes back out of standby with this command on the last standard node

    cluster# pcs node unstandby --all --wait=300
    cluster# echo $?
    
    0
    

    If the exit code is different from 0, some nodes have not been reactivated, so please make sure that all nodes are active before proceeding.

  2. Run the checks in the section Checking that the Cluster Status is Normal; a minimal command-line check is also sketched after this list. If any of these checks fail, please call our service and support team before proceeding.

  3. Re-enable fencing on the last standard node, if it was enabled prior to the upgrade:

    cluster# pcs property set stonith-enabled=true
    
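As a minimal command-line sanity check, in addition to the checks referenced in step 2 above, you can inspect the overall cluster state and confirm that all resources are started and no node remains in standby; for example:

cluster# pcs status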

4. Additional Tasks

In this upgrade, no additional manual steps are required.