User Guide

Troubleshooting

The Update and Upgrade procedures can stop for various reasons. This section collects the most frequent cases and provides guidelines to resolve the issue and continue the procedure.

If you encounter a problem that is not covered on this page, please refer to the official channels (sales, consultant, or support portal) for help and directions on how to proceed.

A check fails

In this case, an informative message will point out the check that failed, allowing you to inspect and fix the problem.

For example, if the exit message is similar to the following one, you need to manually install the latest updates.

"Found updates not installed"
"Example: icingacli, version 2.8.2_neteye1.82.1"

Then, after the updates are installed, you can run the command again and it will start over its tasks.
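
For instance, assuming the node uses the standard DNF package manager, the pending updates could be reviewed and installed as follows (a sketch; the exact packages depend on your installation):

# dnf check-update
# dnf update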

An .rpmnew and/or .rpmsave file is found

This can happen in the presence of customisations to some of the installed packages. Check section Migrate .rpmsave and .rpmnew Files for directions on how to proceed. Once done, remember to run neteye update again.
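
To locate any leftover files, a simple search like the following can be used (a sketch; adjust the search paths to your installation):

# find /etc /neteye \( -name "*.rpmnew" -o -name "*.rpmsave" \) 2>/dev/null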

A cluster resource has not been created

During a NetEye Cluster upgrade, it can happen that new cluster resources need to be created before running the neteye_secure_install script. Resources must be created manually; directions can be found in section Additional Tasks of the Cluster Upgrade from 4.22 to 4.23.
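
Purely as a hypothetical illustration of the syntax, a shared filesystem resource would be created with a pcs command of the following form; the resource name, device and mount point below are placeholders, and the actual definitions are given in the section referenced above:

# pcs resource create example_fs ocf:heartbeat:Filesystem \
      device="/dev/drbdXX" directory="/neteye/shared/example" fstype="xfs"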

A health check is failing

…during the update/upgrade procedure

The NetEye update or upgrade commands run all the deep health checks to ensure that the NetEye installation is healthy before running the update or upgrade procedure. It might happen, however, that one of the checks fails, thus preventing the procedure from completing successfully.

Hence, to solve the problem manually, you should follow the directions in section The NetEye Health Check.

Once the issue is solved, the NetEye update/upgrade commands can be run again.
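
For example, if the update procedure was the one interrupted, it is enough to launch it again once the health check passes:

# neteye update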

…after the finalization procedure

After the finalization procedure has successfully ended, you might notice in the Problems View (see Menu / Problems) that some health check fails and is in the WARNING state. The reason is that you are using a module that needs to be migrated, because a breaking change has been introduced in the release.

Hence, you should go to the Problems View and check which health check is failing. There you will also find instructions for the correct migration of the module, which in almost all cases amounts to enabling an option; the actual migration will then be executed manually.

How to check the NetEye Cluster status

Warning

This command must not be used during a NetEye Cluster Upgrade from version 4.22 to version 4.23, but only at the end, during the Cluster Reactivation step.

Run the following cluster command:

# pcs status

and please ensure that:

  1. Only the last (NN) node MUST be active

  2. All cluster resources are marked “Started” on the last (NN) node

  3. All cluster services under “Daemon Status” are marked active/enabled on the last (NN) node

Note

Requirement 1. does not apply during the Cluster Upgrade procedure.
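
As a quick sanity check for points 2 and 3, assuming the standard pcs output format, you can filter the output for anything that is not started or not active:

# pcs status | grep -iE "stopped|failed"

If the command prints nothing, no stopped or failed resources were reported.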

How to check DRBD status

Check whether the DRBD status is OK by using the drbdmon command, which updates the DRBD status in real time.
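
If you prefer a one-shot, non-interactive check (for example from a script), the same information can be printed with drbdadm, assuming DRBD 9 as described in the documentation linked below:

# drbdadm status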

See also

Section 4.2 of DRBD’s official documentation contains information and details about the possible statuses.

https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-check-status

Incompatible RHEL 8 kernel modules

Some kernel modules are incompatible with or unsupported on RHEL 8. For the LSI Logic SAS driver (mptsas) and the LSI Logic Parallel driver (mptspi) kernel modules, please refer to the following procedure. In all other cases, please refer to the official channels (sales, consultant, or support portal) for directions on how to proceed.

If your VMware-managed Virtual Machine uses either mptsas or mptspi, you will be asked to migrate to the supported VMware Paravirtual SCSI controller before proceeding with the OS upgrade. If such drivers are detected on a physical machine, or the VM is managed by a different hypervisor, please refer to the official channels mentioned above.
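
To verify whether either driver is actually loaded on the machine, you can inspect the kernel module list, for example:

# lsmod | grep -E "mptsas|mptspi"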

To perform this change, please follow this procedure:

  1. Connect to the terminal of the VM that uses the unsupported driver

  2. Execute the following command to rebuild the initramfs without host-only mode, so that drivers for the new controller are included:

    # dracut -N -f
    
  3. Shut down the VM

  4. Edit the settings of the VM from vCenter as follows:

    1. Click SCSI controller 0

    2. Change the controller type to VMware Paravirtual

    3. Click OK to save the changes

  5. Power on the VM

  6. Once the VM has finished booting, connect to it and execute the following command to regenerate the initramfs images for all installed kernels:

    # dracut -f --regenerate-all
    

Once this procedure is finished, the VM will no longer use the unsupported SCSI drivers and you can proceed with the upgrade.
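
As a final check, assuming the controller change was applied correctly, only the VMware Paravirtual SCSI module (vmw_pvscsi) should now appear among the loaded modules:

# lsmod | grep -E "vmw_pvscsi|mptsas|mptspi"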