User Guide

Cluster Upgrade from 4.25 to 4.26

This guide leads you through the steps specific to upgrading a NetEye Cluster installation from version 4.25 to 4.26.

Warning

Remember that you must upgrade sequentially without skipping versions; therefore, an upgrade to 4.26 is possible only from 4.25. For example, if you have version 4.21, you must first upgrade to 4.22, then to 4.23, and so on.

Before starting an upgrade, you should very carefully read the latest release notes on NetEye’s blog and check the feature changes and deprecations specific to the version being upgraded. You should also check the entire Breaking Changes section below.

The remainder of this section is organised as follows. Section Breaking Changes introduces substantial changes that users must be aware of before starting the upgrade procedure, and which may require some tasks to be carried out beforehand; section Prerequisites provides information to be known before starting the upgrade procedure; section Conventions Used defines some notation used in this procedure; section NetEye Single Node Upgrade Procedure presents the actual procedure, including directions for special nodes; section Cluster Reactivation explains how to bring the NetEye Cluster back to full functionality; and finally, section Additional Tasks lists the tasks that must be executed after the upgrade procedure has completed successfully.

Breaking Changes

OCS Inventory NG Plugin for GLPI

Custom OCS Servers

NetEye 4.26 upgrades the OCS Inventory NG plugin for GLPI to version 1.7.3. The new version of the plugin changes the encryption method used to store the password for connecting to the OCS Inventory database. This means that passwords stored before the upgrade can no longer be loaded once NetEye is upgraded to version 4.26.

NetEye 4.26 automatically resets the passwords for the db user glpi_ocsinventoryng, so that all OCS Servers authenticating as glpi_ocsinventoryng continue working after the upgrade.

If, instead, you configured in the OCS Inventory NG plugin for GLPI an OCS Server that authenticates with a db user other than glpi_ocsinventoryng, the plugin will stop working because it cannot decrypt the stored passwords. In such cases, please refer to the official channels: sales, consultants, or the support portal.

OCS Inventory Server

Database Schema Changes: memories.CAPACITY migration

NetEye 4.26 updates the schema of the OCS MySQL database. One of those updates is applied to the CAPACITY column of the memories table. Prior to the upgrade, string values (VARCHAR(255)) could be stored in the CAPACITY column; after the upgrade, the values are restricted to integers only.

In some cases, OCS agents still send string values such as < 1 or No.

We strongly suggest making sure that the CAPACITY column does not contain values other than integers, and backing up the table beforehand. During the upgrade, the neteye_secure_install command will replace any string values with NULL. Afterwards, we suggest checking whether the agents that sent string values continue to send data correctly; otherwise, they should be updated.
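As a quick illustration of the rule the migration applies, the sketch below (plain shell, with hypothetical sample values) shows which CAPACITY values survive and which are replaced with NULL:

```shell
# classify reports whether a CAPACITY value survives the migration:
# plain integers are kept, any other string becomes NULL.
classify() {
    if printf '%s' "$1" | grep -Eq '^[0-9]+$'; then
        echo kept
    else
        echo NULL
    fi
}

# Sample values; '< 1' and 'No' are the kind of strings some agents send.
for value in '2048' '4096' '< 1' 'No'; do
    printf '%s -> %s\n' "$value" "$(classify "$value")"
done
```

On a live system, the equivalent check can be run directly against the memories table with a query restricting on non-integer values (for example with a NOT REGEXP '^[0-9]+$' condition; the exact operator depends on your MySQL/MariaDB version).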

Database Credentials: rpmnew migration

NetEye 4.26 also updates the OCS modules to version 2.10.0. The update produces the following two rpmnew files, related to the ocsinventory-server httpd configuration:

  • /etc/httpd/conf.d/ocsinventory-server-restapi.conf.rpmnew

  • /etc/httpd/conf.d/ocsinventory-server.conf.rpmnew

To avoid producing these rpmnew files in the future, we changed the way in which credentials are passed to the server’s configuration file: they are now provided via an environment variable stored in the /neteye/shared/httpd/conf/sysconfig/ocsinventory-credentials sysconfig file.

For this reason, the database credentials need to be migrated manually, along with the rpmnew files. The migration can be performed according to the procedure below.

Note

On a NetEye Cluster, the neteye upgrade command will terminate with an error message indicating on which node the rpmnew files were generated.

  1. On the node where the rpmnew files were generated, retrieve the current password used by the database user from the /etc/httpd/conf.d/ocsinventory-server-restapi.conf file, for example:

    $ENV{OCS_DB_LOCAL} = 'ocsweb';
    $ENV{OCS_DB_USER} = 'ocsserver';
    $ENV{OCS_DB_PWD} = 'ocsserver_user_password';
    $ENV{OCS_DB_SSL_ENABLED} = 0;
    
  2. On the NetEye Active Node, copy the environment variable template file, removing the .tpl extension:

    cluster-node(not-standby)# cp /neteye/shared/httpd/conf/sysconfig/ocsinventory-credentials.tpl /neteye/shared/httpd/conf/sysconfig/ocsinventory-credentials
    
  3. On the NetEye Active Node, add the password retrieved in Step 1 to the environment file just created from the template. In our example this results in:

    #####################################
    # OCS Inventory credentials sysconfig file
    #####################################
    # This file contains the password used by the ocsserver user
    # to access the ocsweb DB.
    OCS_DB_PWD=ocsserver_user_password
    
  4. On the node where the rpmnew files were generated, migrate them, keeping the newly introduced environment variable as the password (see the Migrate RPM Configuration section for further information).
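After completing the steps above, it may be worth verifying that the new sysconfig file actually defines the password variable. A minimal sketch, assuming only the file layout shown in Step 3:

```shell
# check_cred_file succeeds if the given sysconfig file defines a
# non-empty OCS_DB_PWD variable, as produced in Step 3 above.
check_cred_file() {
    grep -Eq '^OCS_DB_PWD=.+' "$1"
}

# On a NetEye node, the file to check is the one created from the template:
# check_cred_file /neteye/shared/httpd/conf/sysconfig/ocsinventory-credentials
```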

Prerequisites

Upgrading a NetEye Cluster will take a nontrivial amount of time. During the upgrade, individual nodes will be put into standby mode and so overall performance will be degraded until the upgrade procedure is completed and all nodes are removed from standby mode.

When the cluster is healthy, no additional NetEye modules are installed, and the procedure completes without failures, a full upgrade (update + upgrade) takes approximately 30 minutes, plus 15 minutes per node.

For instance, on a 3-node cluster it may take approximately 1 hour and 15 minutes (30 + 15*3).
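The estimate above reduces to a simple formula, 30 + 15*N minutes for an N-node cluster, sketched here as a shell helper:

```shell
# estimate_minutes returns the approximate upgrade duration for a
# healthy N-node cluster: 30 minutes base plus 15 minutes per node
# (package downloads and manual interventions excluded).
estimate_minutes() {
    echo $(( 30 + 15 * $1 ))
}

estimate_minutes 3   # 3-node cluster: prints 75 (1 hour and 15 minutes)
```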

Warning

This estimate does not include the time required to download the packages, nor the time for any manual intervention, such as migrating configurations due to breaking changes or resolving tasks that fail during the execution of the neteye update and neteye upgrade commands.

Conventions Used

A NetEye Cluster can be composed of different types of nodes, including Elastic-only and Voting-only nodes, which require a different upgrade procedure. Therefore, the following notation has been devised to identify the nodes in the cluster.

  • (ALL) is the set of all cluster nodes

  • (N) indicates the NetEye Active node of the Cluster

  • (E) is an Elastic-only node

  • (V) is a Voting-only node

  • (OTHER) is the set of all nodes excluding (N), (E), and (V)

For example, if we take the sample cluster defined in The NetEye Active Node, (ALL) is my-neteye-01, my-neteye-02, my-neteye-03, my-neteye-04, and my-neteye-05.

  • (N) is my-neteye-01

  • (OTHER) is composed of my-neteye-02 and my-neteye-03

  • (E) is my-neteye-04

  • (V) is my-neteye-05

Note

Please see The NetEye Active Node for a discussion about the NetEye Active node.

Running the Upgrade

The Cluster Upgrade is carried out by running the command:

cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out

All the tasks carried out by the command are listed in section neteye upgrade; a dedicated section provides directions in case the command fails.
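The command combines two shell idioms: the parenthesised nohup … & detaches neteye upgrade from the terminal so it survives a dropped SSH session, and tail --retry -f waits for nohup.out to appear and then follows it. The same pattern can be tried with a harmless placeholder command:

```shell
# Demonstration of the detach-and-follow pattern with a placeholder
# command in place of neteye upgrade.
rm -f nohup.out
# Redirect explicitly: nohup creates nohup.out on its own only when
# stdout is a terminal.
(nohup sh -c 'echo started; sleep 1; echo done' > nohup.out 2>&1 &)
# 'tail --retry -f nohup.out' would now follow the log until interrupted;
# here we simply wait for the job to finish and print the log once.
sleep 2
cat nohup.out
```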

Warning

The neteye upgrade command can be run on a standard NetEye node, but it must never be issued on an Elastic-only (E) or a Voting-only (V) Node, because it would turn these nodes into NetEye Nodes.

Special Nodes

In the context of the Upgrade procedure, special nodes are Elastic-only (E) and Voting-only (V) Nodes. They do not need to be upgraded manually, because the neteye upgrade command will automatically take care of upgrading them.

Additional Tasks

In this upgrade, no additional manual steps are required.

Cluster Reactivation

You can now restore the cluster to high availability operation.

  • Bring all cluster nodes back out of standby with this command on the last node (N):

    # pcs node unstandby --all --wait=300
    # echo $?
    
    0
    

    If the exit code is different from 0, some nodes have not been reactivated, so please make sure that all nodes are active before proceeding.

  • Run the checks in the section Checking that the Cluster Status is Normal. If any of those checks fail, please call our service and support team before proceeding.

  • Re-enable fencing on the last node (N), if it was enabled prior to the upgrade:

    # pcs property set stonith-enabled=true
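If the pcs node unstandby command in the reactivation steps above reported a non-zero exit code, it can help to list which nodes are still in standby. A small sketch, assuming the "Standby:" line format printed by pcs status nodes (verify the exact output on your pcs version):

```shell
# standby_nodes extracts node names from the 'Standby:' line of
# 'pcs status nodes' output read on stdin (output format assumed;
# check it against your pcs version).
standby_nodes() {
    awk -F': *' '/^[[:space:]]*Standby:/ { print $2 }'
}

# On the cluster:
# pcs status nodes | standby_nodes
```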