User Guide

Updating CentOS and NetEye

Please read the following guide carefully before starting the update procedure, to make sure you understand all the necessary steps.

NetEye installations can be updated by using the command neteye update.

The command will carry out the following tasks:

  • Check NetEye’s health status

  • Disable fencing (NetEye Clusters only)

  • Put all nodes into standby except the one on which the command is executed (NetEye Clusters only) so that they are no longer able to host cluster resources

  • Install the available updates for both the operating system and NetEye

Warning

The neteye update command may take a long time before it completes successfully, so please do not interrupt it until it exits.
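
Because the update can outlive your SSH session, it is launched via nohup in the sections below; if the session drops, the command keeps running and its output goes to the file nohup.out in the working directory. Assuming the command was started from /root, you can follow the progress from a second shell:

# tail -f /root/nohup.out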

When executed on a cluster, neteye update will neither bring the nodes back from standby nor restore stonith: these steps must be carried out manually after the update has completed successfully.
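
For reference, these are the same commands used in the Cluster Reactivation section at the end of this guide:

# pcs node unstandby --all
# pcs property set stonith-enabled=true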

CentOS Operating System Updates

Operating system updates often address security vulnerabilities that have been recently discovered or not previously disclosed. If operating system updates are not installed in a timely manner, you run the risk of unauthorized access as well as the theft or destruction of personal and/or confidential data.

NetEye base OS packages published in the official public repository are updated on a regular weekly basis: the latest available updates for the current minor CentOS release are fetched and tested within the Würth Phoenix testing area, and after a week of successful testing they are released to the public. The published NetEye ISO is also updated during this regular weekly process.

CentOS minor upgrades are delivered after an extended testing phase during the release cycle currently in progress. If the testing phase is successful, the CentOS minor upgrade is published in the repository for the current minor release, and the NetEye ISO is updated accordingly.

Additional information about CentOS versioning is available in the official documentation.

Update a NetEye Single Instance

The update procedure is carried out by running the command:

# nohup neteye update

As mentioned earlier in this document, the neteye update command checks NetEye’s health status and installs the available updates for both the operating system and NetEye.

If the command is successful, a message will inform you that you can continue the update procedure by checking for any .rpmnew and .rpmsave files (see the dedicated section for further information).

Next, re-initialize all modules using the command neteye_secure_install.

The secure install procedure, implemented within the neteye_secure_install script, is used after every installation, upgrade or update to:

  • Reconfigure NetEye services and/or migrate configurations and databases after important changes

  • Restart services that were stopped or modified

  • Create certificates for secure communication

Before making any changes, the secure install script will also run a subset of light and deep health checks to ensure that NetEye will not be adversely affected by a transient problem like low disk space or custom configurations. Note that this does not replace running a separate, manual deep health check both before and after you upgrade or update.

The script should be run immediately after the installation or update of any new RPM packages from the NetEye repositories. To run it, type the name of the script in a shell as root:

# neteye_secure_install

Finally, ensure that any potentially stopped and/or newly installed NetEye services are running:

# neteye start
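
If you want to spot-check a single service, you can also query systemd directly; icinga2 is used here purely as an example service name:

# systemctl is-active icinga2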

Update a NetEye Cluster

Prerequisites

Updating a cluster will take a nontrivial amount of time; however, no downtime needs to be planned. During the cluster update, individual nodes will be put into standby mode, so overall cluster performance will be degraded until the update procedure is completed and all nodes are removed from standby mode.

An estimate for the time needed to update a healthy cluster without problems is approximately 10 minutes plus 5 minutes per node. So, for instance, a 3-node cluster may take approximately 25 minutes (10 + 5*3). This estimate is a lower bound that does not include additional time should there be a kernel update or if you have additional modules installed.

This user guide uses the following conventions to indicate on which node(s) you should execute each step:

  • (ALL) is the set of all cluster nodes

  • (N) indicates the last node

  • (OTHER) is the set of all nodes excluding (N)

For example if (ALL) is neteye01.wp, neteye02.wp, and neteye03.wp then:

  • (N) is neteye03.wp

  • (OTHER) is neteye01.wp and neteye02.wp

The order in which the (OTHER) nodes are updated is not important. However, note that the last node (N) to be updated requires a slightly different process than the other nodes (see Post Update Steps For The Last Node (N) for details).

It is critical that the versions of the Linux kernel and DRBD match. If an update would cause the versions to be out of alignment, you should not update or upgrade your cluster. You can find the currently installed and the available package versions by running the following two commands, then checking that the version numbers reported under Installed Packages match those reported on the last line under Available Packages.

# yum list kernel --show-duplicates
... 3.10.0-1160.31.1.el7 ...
# yum list kmod-drbd --show-duplicates
... 9.0.29_3.10.0_1160.31.1-1 ...

If yum reports that either the kernel or kmod-drbd has a newer version available, you need to check that after an update their versions will again be the same. For instance, in the example above, the Available Packages lines of both packages refer to the same kernel version, 3.10.0-1160.31.1 (dashes appear as underscores in the kmod-drbd version string): only if they match can you proceed with the update.
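
This comparison can also be scripted. Below is a minimal, unofficial sketch that assumes the yum output format shown above: it takes the version on the last line of each Available Packages listing, normalizes the kernel version string, and checks whether the latest kmod-drbd was built against the latest kernel:

# Minimal sketch, not an official NetEye tool; assumes both listings are non-empty.
kernel=$(yum -q list available kernel --show-duplicates | awk 'END {print $2}')
drbd=$(yum -q list available kmod-drbd --show-duplicates | awk 'END {print $2}')
# e.g. 3.10.0-1160.31.1.el7 -> 3.10.0_1160.31.1 (the form embedded in kmod-drbd)
kernel_norm=$(echo "$kernel" | sed -e 's/\.el7$//' -e 's/-/_/g')
case "$drbd" in
  *"$kernel_norm"*) echo "OK: kmod-drbd $drbd matches kernel $kernel" ;;
  *) echo "MISMATCH: do not update (kernel $kernel, kmod-drbd $drbd)" ;;
esac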

Checking That The Cluster Status Is Normal

At several points in the update process you will need to check that the cluster status is normal. When asked to check that the cluster status is normal, you should first start an SSH session on node (N).

Run the following cluster command:

# pcs status

and please ensure that:

  • Only the last node (N) must be active

  • All cluster resources are marked “Started” on the last node (N)

  • All cluster services under “Daemon Status” are marked active/enabled on the last node (N)

If any of the above checks fail, please call our service and support team before proceeding.
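
For a quick non-interactive check, you can also filter the pcs output. This is only a rough sketch: it looks for resource or daemon entries reported as stopped or failed, so no output means nothing obviously wrong was reported:

# pcs status | grep -iE 'stopped|failed'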

Update RPMs On All Cluster Nodes

The Cluster RPMs update is carried out by running the command:

# nohup neteye update

Warning

The neteye update command can be run on a standard NetEye node, but never on an Elastic-only or a Voting-only Node.

If the command is successful, a message will inform you that it is possible to continue the cluster upgrade procedure.

Recall that the neteye update command will not:

  • bring all cluster nodes back out of standby

  • restore stonith

Therefore, remember to carry out these steps manually when required, namely in the Cluster Reactivation step below.

Update All Cluster Nodes (ALL)

Repeat these update steps for all nodes (ALL).

#1 Check cluster status

Run the following cluster command:

# pcs status

and please ensure that:

  • Only the last node (N) must be active

  • All cluster resources are marked “Started” on the last node (N)

  • All cluster services under “Daemon Status” are marked active/enabled on the last node (N)

#2 Check DRBD status

Check that the DRBD status is OK by using the drbdmon command, which updates the DRBD status in real time.

See also

Section 4.2 of DRBD’s official documentation contains information and details about the possible statuses.

https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-check-status
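
If you only need a one-shot snapshot instead of the interactive view, the DRBD 9 utilities also provide:

# drbdadm status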

#3 Migrate configuration of RPMs

Each upgraded package can potentially create .rpmsave and/or .rpmnew files. You will need to verify and migrate all such files.

You can find more detailed information about what those files are and why they are generated in the official RPM documentation.

Briefly, if a package ships a new version of a configuration file and that file has been edited locally since it was installed, then the package manager will do one of two things:

  • If the new system configuration file should replace the edited version, it will save the old edited version as an .rpmsave file and install the new system configuration file.

  • If the new system configuration file should not replace the edited version, it will leave the edited version alone and save the new system configuration file as an .rpmnew file.

Note

You can use the following commands to locate .rpmsave and .rpmnew files:

# updatedb
# locate *.rpmsave*
# locate *.rpmnew*
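
If the mlocate utilities are not available, a find-based sketch limited to /etc (extend the search path as needed) achieves the same result:

# find /etc -name '*.rpmnew' -o -name '*.rpmsave'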

The instructions below will show you how to keep your customized operating system configurations.

How to Migrate an .rpmnew Configuration File

The update process creates an .rpmnew file if a configuration file has changed since the last version, so that customized settings are not replaced automatically. Your customizations need to be merged into the new .rpmnew configuration file in order to activate the new settings shipped with the package while maintaining your previous customized settings. The following procedure uses Elasticsearch as an example.

First, run a diff between the original file and the .rpmnew file:

# diff -uN /etc/sysconfig/elasticsearch /etc/sysconfig/elasticsearch.rpmnew

OR

# vimdiff /etc/sysconfig/elasticsearch /etc/sysconfig/elasticsearch.rpmnew

Copy all custom settings from the original into the .rpmnew file. Then create a backup of the original file:

# cp /etc/sysconfig/elasticsearch /etc/sysconfig/elasticsearch.01012018.bak

And then substitute the original file with the .rpmnew:

# mv /etc/sysconfig/elasticsearch.rpmnew /etc/sysconfig/elasticsearch
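
As a final sanity check, you can diff the backup against the file now in place; the only differences shown should be the ones introduced by the new package:

# diff -uN /etc/sysconfig/elasticsearch.01012018.bak /etc/sysconfig/elasticsearch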

How to Migrate an .rpmsave Configuration File

The update process creates an .rpmsave file if a configuration file had been changed in the past and the updater has automatically replaced the customized file in order to activate the new configuration immediately. To preserve your customizations from the previous version, you will need to migrate them from the .rpmsave file into the new configuration file.

Run a diff between the new file and the .rpmsave file:

# diff -uN /etc/sysconfig/elasticsearch.rpmsave /etc/sysconfig/elasticsearch

OR

# vimdiff /etc/sysconfig/elasticsearch.rpmsave /etc/sysconfig/elasticsearch

Copy all custom settings from the .rpmsave into the new configuration file, and preserve the original .rpmsave file under a different name:

# mv /etc/sysconfig/elasticsearch.rpmsave /etc/sysconfig/elasticsearch.01012018.bak

Post Update Steps On (OTHER) nodes

Run the NetEye Secure Install on the (OTHER) nodes, waiting for it to complete successfully on one node before running it on the next:

# nohup neteye_secure_install

Post Update Steps On The Elastic-only, Voting-only Nodes

Run the NetEye Secure Install on the Elastic-only and/or the Voting-only nodes:

# nohup neteye_secure_install

Post Update Steps For The Last Node (N)

Run the NetEye Secure Install on the last node (N):

# nohup neteye_secure_install

If a new kernel was installed on the last node (N), that node should now be rebooted; however, it is the active node and cannot be rebooted yet. First, run the following command to remove all the nodes from standby mode:

# pcs node unstandby --all --wait=300
# echo $?
0

If the exit code is different from 0, some nodes may not have been reactivated; make sure that all nodes are active before proceeding.
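
One way to verify this is to list the node states:

# pcs status nodes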

Then, run the following command to put the current node into standby, so that it is no longer able to host cluster resources:

# pcs node standby --wait=300
# echo $?
0

If the exit code is different from 0, the current node is not yet in standby; make sure that it is in standby before proceeding.

Final tasks

  • Please ensure that only the last node is in standby and all the others are Online.

  • Now, you can reboot the current node.

Cluster Reactivation

You can now restore the cluster to high availability operation.

  • Run the following command to remove all nodes from standby mode (it doesn’t matter which node this command is run on):

    # pcs node unstandby --all
    
  • Please ensure your cluster is healthy by checking the standard procedure described in the section Checking That The Cluster Status Is Normal.

  • If you previously disabled stonith in order to disable fencing, re-enable it:

    # pcs property set stonith-enabled=true
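
    You can verify the property afterwards (pcs syntax as used on CentOS 7):

    # pcs property show stonith-enabled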
    

Update NetEye Satellites

To update a Satellite, the configuration archive must be located at /root/satellite-setup/config/<neteye_release>/satellite-config.tar.gz.
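
You can check that the archive is in place before starting (replace <neteye_release> with your actual NetEye release):

# ls -l /root/satellite-setup/config/<neteye_release>/satellite-config.tar.gz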

To automatically download the latest updates, run the following command on the Satellite:

# neteye satellite update

The command will download and install the latest versions of both the operating system packages and the NetEye stable packages.

Please check for any .rpmnew and .rpmsave files (see the Migrate RPM Configuration section for further information).

If the command is successful, a message will inform you that it is possible to continue the update procedure.

Execute the command below to set up the Satellite with the new updates:

# neteye satellite setup