The neteye Command¶
The neteye CLI command carries out a number of administrative tasks on NetEye installations, both Single Nodes and Clusters. Its various sub-commands are described in this section.
All output from each command execution is saved in a log file under /neteye/local/os/log/neteye_command/.
A retention policy is applied to these files on a daily basis as follows:
- Files are compressed
- If the total size of the logs exceeds the configured maximum (500MB by default), the oldest files are deleted
- Files older than 2 years are deleted
You can configure the retention policy in /neteye/local/os/conf/logscleaner.d/neteye-command-log.toml.
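For instance, to check how much space the command logs currently occupy before tuning the retention policy, you can inspect the log directory with standard tools:
neteye# du -sh /neteye/local/os/log/neteye_command/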
neteye status¶
neteye status is used to list the NetEye services and their status, either UP or DOWN.
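For example, to list all services and verify that they are UP:
neteye# neteye status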
neteye start | stop¶
The neteye start and neteye stop commands are used to start or stop all NetEye services at once.
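For example, to stop and then start again all NetEye services around a maintenance window:
neteye# neteye stop
neteye# neteye start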
neteye update¶
The neteye update command runs a number of tasks as listed below in order of execution.
| Task | Order | Single Node | Cluster | Description |
|---|---|---|---|---|
| Health checks | #1 | yes | yes | Carry out health checks to verify that the NetEye installation is healthy and eligible for update |
| Standby and fencing | #2 | no | yes | Put all nodes in standby except the NetEye Active Node, and disable fencing, if enabled |
| Update RPM | #3 | yes | yes | Install all RPM updates (bugfixes) for the current version |
| | #4 | yes | yes | Searches for any |
| Secure install | #5 | yes | yes | Execute the neteye_secure_install script |
If any of these tasks is unsuccessful, a message explains where the command failed, allowing you to manually fix the corresponding step and then run the neteye update command again. See also the Troubleshooting section for more information and directions on fixing problems.
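For example, after manually fixing the step reported in the error message, simply run the command again:
neteye# neteye update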
neteye upgrade¶
The neteye upgrade command is executed after neteye update and carries out a number of tasks that differ depending on whether it runs on a Single Node or on a Cluster node.
Warning
The neteye upgrade command may take a long time before it completes successfully, so please do not interrupt it until it exits.
The tasks carried out by the neteye upgrade command are listed below in order of execution, together with whether they run on Single Nodes, Clusters, or both.
| Task | Order | Single Node | Cluster | Description |
|---|---|---|---|---|
| Health checks | #1 | yes | yes | Carry out health checks to verify that the NetEye installation is healthy and eligible for update |
| Check update status | #2 | yes | yes | Check that NetEye is fully updated and there are no minor (bugfix) updates left to install; otherwise install the available updates |
| Upgrade eligibility | #3 | yes | yes | Verify that NetEye is eligible for upgrade: check which version is installed (e.g., 4.20) and that the last upgrade was finalized |
| Standby and fencing | #4 | no | yes | Put all nodes in standby except the Active Node, and disable fencing, if enabled |
| Active Node check | #5 | no | yes | Make sure the Active Node is active (i.e. not in standby mode); please refer to section The NetEye Active Node below to understand which node is considered the Active Node |
| Repo update | #6 | yes | yes | Update all the NetEye repositories to the next version to which it is possible to upgrade (e.g., 4.21) |
| Packages check | #7 | yes | yes | Check for new software packages in the repositories |
| Package install | #8 | yes | yes | Install new packages |
| Yum groups install | #9 | yes | yes | Install new packages that belong to the NetEye yum groups |
| | #10 | yes | yes | Searches for any |
| Finalise installation | #11 | yes | yes | Run neteye_finalize_installation to make sure that the correct NetEye version is stored |
If the neteye upgrade command is successful, a message informs you that the upgrade procedure concluded successfully. Otherwise, if the command fails at some point, you need to fix the failed tasks manually and then run the command again. See also the Troubleshooting section for more information and directions on fixing problems.
What neteye update and neteye upgrade do not do on Clusters¶
The following tasks are required to bring a cluster back to the correct operative status after an update or an upgrade and need to be carried out manually:
- Unstandby nodes
- Restore stonith on the cluster
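A minimal sketch of the corresponding pcs commands, assuming a standard Corosync/Pacemaker setup where fencing is controlled by the stonith-enabled cluster property:
cluster# pcs node unstandby --all
cluster# pcs property set stonith-enabled=true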
Additionally, the commands cannot be launched on Elastic-only or Voting-only nodes. Note, however, that even though the two commands can be executed only on operative nodes, the update/upgrade procedure is also performed on Elastic-only and Voting-only nodes.
Moreover, the commands currently do not update/upgrade InfluxDB-only nodes. The maintenance of InfluxDB-only nodes is the user's responsibility.
neteye update vs. neteye upgrade¶
The main difference between the two commands is that neteye update installs all available packages within the current version of NetEye, while neteye upgrade installs all available packages of the next version of NetEye.
For example, given a NetEye version 4.20, neteye update fully updates NetEye 4.20 with the latest packages in the 4.20 repository, while neteye upgrade installs and configures all new packages available in the 4.21 repository.
neteye node¶
The neteye node command performs operations on the node on which it is executed, such as updating the operating system.
neteye node system-upgrade¶
The neteye node system-upgrade command, executed on a specific node, is responsible for the upgrade of NetEye from version 4.22 to 4.23, which includes the upgrade of the operating system from CentOS 7 to RHEL 8.
As described in the upgrade procedure, it first brings the operating system from NetEye 4.22 on CentOS 7 to RHEL 7, and then to RHEL 8 with NetEye 4.23. After each change of operating system, a reboot is required. In the case of a cluster, the command must be executed node by node: before starting with a new node, the previous one must have finished the upgrade to RHEL 8.
This command does not carry out any task on versions 4.23 onwards.
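On versions where it applies, the invocation is simply the following (a sketch; run it on each node in turn, rebooting when requested):
neteye# neteye node system-upgrade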
neteye node register¶
The neteye node register command registers the RHEL 8 subscription and sets up Red Hat Insights. By default it uses the NetEye activation key. If the node is registered with another organization, this behaviour can be changed by modifying /neteye/local/os/conf/subscription-manager.toml:
[subscription-manager]
enable_auto_rhel_subscription = false
If the setting enable_auto_rhel_subscription is set to false, the Red Hat registration and the Red Hat Insights subscription are skipped.
This command will also be called during neteye_secure_install if it is not disabled.
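To run the registration manually:
neteye# neteye node register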
neteye node reboot¶
The neteye node reboot command helps to safely reboot a node. It should be run whenever a node reboot is required during the upgrade/update procedure. The reboot is performed only if the target node is in standby and the deep health checks pass.
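For example, during an upgrade you would run on the target node:
neteye# neteye node reboot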
neteye tenant¶
The neteye tenant command helps you manage the configuration of NetEye Tenants.
neteye tenant config create¶
The neteye tenant config create command configures a new NetEye Tenant. The command takes care of creating the actual configuration of the Tenant and configures the services dedicated to it; if it is issued on a NetEye Cluster, it also takes care of synchronizing the Tenant configuration on all the Cluster Nodes. The command can also be run on a Single Node.
Usage:
neteye# neteye tenant config create TENANT_NAME [OPTIONS]
Note that the name of the Tenant must respect the following constraints:
- It must match the regex /^[a-zA-Z0-9_]{1,32}$/, i.e. it must contain only alphanumeric characters and underscores, and must be between 1 and 32 characters long
- It can not contain the string icinga2
However, any type of string or character is allowed as the Tenant's displayed name if it is provided via the --display-name option.
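For example, the following creates a Tenant whose name satisfies the constraints above, with a display name containing spaces (tenant_a and Tenant A are purely illustrative values):
neteye# neteye tenant config create tenant_a --display-name "Tenant A"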
Options:
| Option | Description |
|---|---|
| --display-name | (Mandatory) a more user-friendly name of the Tenant, used for visualization purposes: it must be unique and can contain spaces and special characters |
| --enable-module | (Optional) allows users to enable NetEye Feature Modules at Tenant level, starting from the set of Feature Modules installed on the NetEye Master. The option accepts multiple values |
| --influxdb-node | (Optional) the hostname of the InfluxDB-only node, as defined in the /etc/neteye-cluster file |
| --alyvix-metrics-retention | (Optional) the retention, in days, of the Alyvix Test Cases performance metrics stored in the InfluxDB database dedicated to the Tenant |
| --custom-override-grafana-org | (Optional) the custom Grafana organization associated with the Tenant. By default it matches the |
| --custom-override-glpi-entity | (Optional) the custom GLPI entity associated with the Tenant. By default it is |
| --force | (Optional) forces the command to overwrite the existing configuration of the Tenant. This option completely overwrites the existing configuration; please refer to neteye tenant config modify if you need to partially modify the configuration of an existing Tenant |
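A more complete invocation might look like the following sketch: all values are illustrative, my-neteye-07 is assumed to be the InfluxDB-only node defined in /etc/neteye-cluster, and the Alyvix retention is assumed to be given as a plain number of days:
neteye# neteye tenant config create tenant_a --display-name "Tenant A" --influxdb-node my-neteye-07 --alyvix-metrics-retention 30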
neteye tenant config apply¶
The neteye tenant config apply command sets up the services dedicated to the Tenant passed as argument. This command is used internally by neteye tenant config create, so NetEye users do not need to run this command explicitly.
Usage:
neteye# neteye tenant config apply TENANT_NAME
Options:
| Option | Description |
|---|---|
| --all | (Optional) Apply the configuration of all the configured NetEye Tenants. This option is mutually exclusive with TENANT_NAME |
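For example, to re-apply the configuration of all configured Tenants at once:
neteye# neteye tenant config apply --all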
neteye tenant config modify¶
The neteye tenant config modify command allows you to modify the configuration of an existing NetEye Tenant. The command can also be run on a Single Node.
Usage:
neteye# neteye tenant config modify TENANT_NAME [OPTIONS]
Options:
| Option | Description |
|---|---|
| --influxdb-node | (Optional) |
| --enable-module | (Optional) |
| --alyvix-metrics-retention | (Optional) |
| --custom-override-grafana-org | (Optional) |
| --custom-override-glpi-entity | (Optional) |
Please refer to neteye tenant config create for a detailed description of the available options.
Certain values can only be set once. If they are already set, they have to be overwritten either manually or with neteye tenant config create --force. This, however, does NOT clean up the old configuration or migrate any data. Currently the fields that can be set only once are: display-name, influxdb-node, custom-override-grafana-org and custom-override-glpi-entity. For more information, please refer to the official channels: sales, consultants, or the support portal.
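For example, the following sketch changes only the Alyvix metrics retention of an existing Tenant, leaving the rest of its configuration untouched (tenant_a and 60 are illustrative values):
neteye# neteye tenant config modify tenant_a --alyvix-metrics-retention 60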
neteye dpo setup¶
The neteye dpo setup command sets up, directly from NetEye, the DPO machine to run the El Proxy verification.
The setup is based on the configuration specified in /etc/neteye-dpo, as described in How to Setup the Automatic Verification of Blockchains.
The configuration file in JSON format contains the following attributes:
- dpo_host: the IP address or hostname of the DPO machine you would like to configure
- dpo_user: the user performing the SSH connection to the DPO machine
- elastic_blockchain_proxy_d_path (Optional): the path to a folder containing additional toml configuration files, as described in El Proxy Configuration
- retention_policy_d_path (Optional): the path to a folder containing customised retention policies, as described in Adding Custom Retentions to El Proxy
- es_ca (Optional): the path to the Elasticsearch root CA, needed for the connection when verifying the blockchain. If not specified, the default CA file /neteye/local/elasticsearch/conf/certs/root-ca.crt will be used
- blockchains_verification: an array containing a JSON object for each verification that you would like to configure. Each verification object in its turn contains the following attributes:
  - tenant: the Tenant of the blockchain you would like to verify
  - retention: the retention of the blockchain you would like to verify
  - tag: the tag of the blockchain you would like to verify
  - webhook_host: the FQDN of the host where your Tornado Webhook Collector is running
  - webhook_token: the secret token chosen for the Tornado Webhook Collector
  - logs_file_size_limit_in_megabytes: the maximum size that the DPO logs and reports can occupy. The most recent log and report will be preserved regardless of their size
  - cron_scheduling: a JSON object that specifies the scheduling of the verification. For more information about the values each property can assume, you can consult this online guide
    - minute: minute of the day on which the verification should take place
    - hour: hour of the day on which the verification should take place
    - day: day of the month on which the verification should take place
    - month: month on which the verification should take place
    - week_day: day of the week on which the verification should take place
  - es_client_cert_path (Optional): path of the client certificate used to connect to Elasticsearch for the verification. If not specified, the default path /neteye/local/elasticsearch/conf/certs/neteye_ebp_verify_<tenant>.crt.pem will be used
  - es_client_key_path (Optional): path of the client private key used to connect to Elasticsearch for the verification. If not specified, the default path /neteye/local/elasticsearch/conf/certs/private/neteye_ebp_verify_<tenant>.key.pem will be used
  - web_server_ca (Optional): path to the certificate used by the webserver hosting the Tornado Webhook Collector
  - additional_parameters (Optional): a list of additional parameters, as strings, that will be passed to the El Proxy verify command
To check out an example of a configuration of the /etc/neteye-dpo file, please consult Step 1. Configure the blockchain verification.
Moreover, the command is also used to update/upgrade the verification container images on the DPO machine after a NetEye update/upgrade; this will then cause the restart of the previously configured containers.
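Since the command reads its configuration from /etc/neteye-dpo, a plain invocation should suffice once that file is in place (a sketch, assuming no additional options are needed):
neteye# neteye dpo setup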
neteye alyvix-node setup¶
The neteye alyvix-node setup command sets up the connection of the specified Alyvix node to NetEye. Additionally, the --all flag can be used to configure all available Alyvix nodes. Please note that, regardless of any options included, this command must be executed from the NetEye Master.
Usage:
neteye# neteye alyvix-node setup [ALYVIX_NODE_HOSTNAME | --all]
Options:
| Option | Description |
|---|---|
| --all | (Optional) Execute the setup for all the Alyvix nodes listed in the Director. This option is mutually exclusive with ALYVIX_NODE_HOSTNAME |
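For example, to configure all the Alyvix nodes listed in the Director at once:
neteye# neteye alyvix-node setup --all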
neteye cluster install¶
The neteye cluster install command creates a basic Corosync/Pacemaker cluster with a floating IP, starting from the configuration file located at /etc/neteye-cluster. If --force is specified, the command tries to create a new cluster, destroying any existing cluster.
This command automatically updates on each node the password of the hacluster user, which is used for authenticating the nodes to the cluster. If /root/.pwd_hacluster does not contain a password, a newly created password will be saved in the file.
Usage:
neteye# neteye cluster install [-y] [--force]
Options:
| Option | Description |
|---|---|
| -y | (Optional) Non-interactive installation |
| --force | (Optional) Force the creation of the cluster (data lost as a result of this action cannot be recovered) |
Supporting Scripts¶
Two scripts complement the abilities of the neteye update and neteye upgrade commands:
- neteye_secure_install
- neteye_finalize_installation
For more details, refer to the next two sections.
neteye_secure_install¶
neteye_secure_install
is a wrapper around a number of scripts that
take care of the initial configuration of a NetEye installation and
then start all services that are required for NetEye to operate
correctly.
While this command is the first to be executed after NetEye's initial configuration, it must never be run manually during the update and upgrade procedures, because it is called automatically by the neteye update and neteye upgrade commands.
In a nutshell, the tasks carried out by the script are:
- To register the machine to RHEL 8
- To set up Red Hat Insights
- To reconfigure NetEye services and/or migrate configurations and databases after important changes
- To restart services that were stopped or modified
- To create certificates for secure communication
Before making any changes, the secure install script will also run a subset of light and deep health checks to ensure that NetEye will not be adversely affected due to a transient problem like low disk space or custom configurations.
Note
This automatic set of checks is not intended to replace the good practice of running a separate, manual deep health check both before and after an update or upgrade.
The neteye_secure_install script is automatically called by the neteye update and neteye upgrade commands right after the installation of any new RPM packages from the NetEye repositories.
To run it manually, just type in the name of the script in a shell as root:
# neteye_secure_install
neteye_finalize_installation¶
neteye_finalize_installation
is the last command executed during
an upgrade procedure and makes sure that the correct NetEye version is
stored. It is the last task of neteye upgrade.
Note
This command should never be run manually during the update and upgrade procedures, as it is called automatically by the neteye update and neteye upgrade commands. If you nevertheless need to launch it manually, follow the steps described below.
Complete the upgrade process by launching the following script:
# neteye_finalize_installation
Note
You should launch the finalize command only if you want to perform the upgrade manually and only if all previous steps have been completed successfully. If you encounter any errors or problems during the upgrade process, please contact our service and support team to evaluate the best way forward for upgrading your NetEye system.
The NetEye Active Node¶
During the update and upgrade operations, it is mandatory that one of the operative nodes is always active. The nodes of a cluster are listed in the /etc/neteye-cluster file, for example like the following:
{
  "Hostname" : "my-neteye-cluster.example.com",
  "Nodes" : [
    {
      "addr" : "192.168.47.1",
      "hostname" : "my-neteye-01",
      "hostname_ext" : "my-neteye-01.example.com",
      "id" : 1
    },
    {
      "addr" : "192.168.47.2",
      "hostname" : "my-neteye-02",
      "hostname_ext" : "my-neteye-02.example.com",
      "id" : 2
    },
    {
      "addr" : "192.168.47.3",
      "hostname" : "my-neteye-03",
      "hostname_ext" : "my-neteye-03.example.com",
      "id" : 3
    },
    {
      "addr" : "192.168.47.4",
      "hostname" : "my-neteye-04",
      "hostname_ext" : "my-neteye-04.example.com",
      "id" : 4
    }
  ],
  "ElasticOnlyNodes": [
    {
      "addr" : "192.168.47.5",
      "hostname" : "my-neteye-05",
      "hostname_ext" : "my-neteye-05.example.com",
      "id" : 5
    }
  ],
  "VotingOnlyNode" : {
    "addr" : "192.168.47.6",
    "hostname" : "my-neteye-06",
    "hostname_ext" : "my-neteye-06.example.com",
    "id" : 6
  },
  "InfluxDBOnlyNodes": [
    {
      "addr" : "192.168.47.7",
      "hostname" : "my-neteye-07",
      "hostname_ext" : "my-neteye-07.example.com"
    }
  ]
}
The NetEye Active Node will always be the first node appearing in the list of Nodes; in this case it is the node with FQDN my-neteye-01.example.com, and it is the one that must always be active during the update/upgrade procedure.
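For example, assuming the jq utility is available on the system, you can print the FQDN of the Active Node directly from the cluster definition file:
neteye# jq -r '.Nodes[0].hostname_ext' /etc/neteye-cluster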
Therefore, before running neteye update and neteye upgrade, log in to my-neteye-01.example.com and make sure that it is not in standby mode. To do so, first execute the command to check the status of the cluster:
cluster# pcs status
Then, if my-neteye-01.example.com is in standby, make it active with the command:
cluster# pcs node unstandby my-neteye-01.example.com
See also
How nodes are managed by the NetEye update/upgrade commands is described in great detail in a NetEye blog post: https://www.neteye-blog.com/2021/10/hosts-and-neteye-upgrade/