Basic Concepts & Usage¶
The neteye CLI command carries out several tasks related to NetEye installations, both Single Node and Cluster. Its various sub-commands are described in this section.
All output from each command execution is saved in a log file under /neteye/local/os/log/neteye_command/.
A retention policy is applied to these files on a daily basis:
- Files are compressed
- If the total size of the logs exceeds the configured maximum (500 MB by default), the oldest files are deleted
- Files older than 2 years are deleted
You can configure the retention policy in /neteye/local/os/conf/logscleaner.d/neteye_command_logs.toml
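The retention rules above can be sketched as follows. This is an illustrative model only, not the actual logscleaner implementation; the function name and the tuple layout are assumptions made for the example.

```python
# Illustrative sketch (NOT the actual logscleaner code) of the retention
# rules described above, applied to (name, size_mb, age_days) tuples
# sorted newest first.
MAX_TOTAL_MB = 500      # default size limit for the log directory
MAX_AGE_DAYS = 2 * 365  # files older than two years are deleted

def files_to_delete(files):
    doomed, total_mb = [], 0.0
    for name, size_mb, age_days in files:
        total_mb += size_mb
        if age_days > MAX_AGE_DAYS or total_mb > MAX_TOTAL_MB:
            doomed.append(name)  # too old, or pushes the total over the cap
    return doomed
```

Because the list is ordered newest first, exceeding the size cap marks the oldest files for deletion, matching the policy described above.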
neteye install¶
neteye install is a wrapper around a number of scripts that take care of the initial configuration of a NetEye installation and then start all services required for NetEye to operate correctly. Please note that this command must never be used in the update and upgrade procedures.
If run on a cluster, it automatically installs NetEye on every node defined in /etc/neteye-cluster and must be called only once, from any cluster node.
Note
In cluster environments, services are configured in parallel without requiring the cluster nodes to be in standby mode. This approach involves setting up services on all nodes simultaneously, rather than waiting for the configuration process to finish on the initial node.
Before making any changes, the install command also runs a subset of light and deep health checks to ensure that NetEye will not be adversely affected by a transient problem such as low disk space or custom configurations.
Note
This automatic set of checks is not intended to replace the good practice of running a separate, manual deep health check both before and after an update or upgrade.
To run the command manually, type its name in a shell as root:
# neteye install
If you want to learn more about this command, please refer to the advanced neteye install section where you can find more details about the underlying processes and execution steps.
neteye status¶
neteye status is used to list the NetEye services and their status, either UP or DOWN.
neteye start | stop¶
The neteye start and neteye stop commands are used to start or stop all NetEye services at once.
neteye update¶
The neteye update command updates your NetEye installation to the latest version available for your current release, delivering the latest bug fixes and security patches. To learn more about the update process, or to carry out an update, please refer to the advanced neteye update and Update Procedure sections respectively.
neteye upgrade¶
The neteye upgrade command brings your NetEye installation to the next major version. It requires the latest updates of the current NetEye version to be installed: if updates are still available, the command stops with an error message. The upgrade process is a complex operation and may take a long time to complete.
If the neteye upgrade command is successful, a message informs you that the upgrade procedure concluded successfully. If the command breaks at some point, you need to fix the failed tasks manually and then launch the command again. Check also the Troubleshooting section for more information and directions on fixing problems. To learn more about the upgrade process, or to carry out the upgrade for single or cluster environments, please refer to the advanced neteye upgrade, Single Node Upgrade from 4.39 to 4.40 and Cluster Upgrade from 4.39 to 4.40 sections respectively.
neteye config¶
The neteye config command lets you interact with fundamental parts of the NetEye configuration.
neteye config cluster sync¶
The sync subcommand copies the cluster configuration files (/etc/neteye-cluster and /etc/neteye-satellite.d/*) from the current node to all other nodes in the cluster, ensuring that all files are in sync.
neteye config auth idp¶
neteye config auth idp list¶
Lists all the configured identity providers with their configurable properties.
neteye config auth idp set¶
This command allows you to overwrite certain fields of an identity provider. The identity provider instance is addressed by its alias, passed as a positional argument.
Usage:
neteye# neteye config auth idp set ALIAS [OPTIONS]
Where ALIAS is the alias of the identity provider you want to modify. It can be retrieved by running the neteye config auth idp list command.
Options:
| Option | Description |
|---|---|
| --domains | A list of comma-separated domains. Overwrites the current domains. |
| --force | Force the overwriting of the changes without a confirmation prompt. |
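The comma-separated format of the --domains value can be illustrated with a small sketch. The helper below is hypothetical, written only to show how such a value maps to the list that replaces the provider's current domains; it is not NetEye's internal code.

```python
# Hypothetical helper: turn a --domains value such as
# "example.com,test.local" into the list of domains that overwrites the
# identity provider's current domains (illustrative, not NetEye code).
def parse_domains(value: str) -> list[str]:
    domains = [d.strip() for d in value.split(",") if d.strip()]
    if not domains:
        raise ValueError("--domains requires at least one domain")
    return domains
```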
neteye node¶
The neteye node command performs operations on the node on which it is executed, such as updating the operating system.
neteye node system-upgrade¶
The neteye node system-upgrade command, executed on a specific node, is responsible for the upgrade of NetEye from version 4.22 to 4.23 and also for the upgrade of the operating system from CentOS 7 to RHEL 8.
As described in the upgrade procedure, it upgrades the operating system from NetEye 4.22 on CentOS 7 to RHEL 7, and then to RHEL 8 with NetEye 4.23. After each change of operating system, a reboot is required. In the case of a cluster, the command must be executed node by node: before starting on a new node, the previous one must have finished the upgrade to RHEL 8.
This command does not carry out any task on versions 4.23 onwards.
neteye node register¶
The neteye node register command registers the RHEL 8 subscription and sets up Red Hat Insights. By default it uses the NetEye activation key. If the node is registered with another organization, this behaviour can be changed by modifying /neteye/local/os/conf/subscription-manager.toml:
[subscription-manager]
enable_auto_rhel_subscription = false
If the setting enable_auto_rhel_subscription is set to false, the Red Hat registration and the Red Hat Insights subscription are skipped.
This command will also be called during neteye install if it is not disabled.
neteye node reboot¶
The neteye node reboot command safely reboots the node. It should be run when a node reboot is required during the update/upgrade procedure. It performs the reboot only if the target node is in standby and the deep health checks have passed.
neteye feature-module¶
The neteye feature-module command concerns the feature modules enabled in the NetEye system. It contains all the dedicated subcommands applicable to the installed feature modules.
neteye feature-module neteye-siem¶
The neteye feature-module neteye-siem command concerns the SIEM feature module and is available only if the additional component is installed in the NetEye system.
neteye feature-module neteye-siem elastic-stack-subscription set¶
The neteye feature-module neteye-siem elastic-stack-subscription set command enables you to switch between Platinum and Enterprise Elastic Stack subscription levels based on your needs. For example, to activate the Enterprise subscription of Elastic Stack you can execute:
neteye feature-module neteye-siem \
elastic-stack-subscription set enterprise
Note
Changing your subscription will affect the licensing plan of your installation. It is your responsibility to ensure you have the appropriate licensing in place before activating a new subscription. Please contact NetEye support or the Sales office for more information.
The same command can be used for downgrading to a “Platinum” subscription. Since certain functionalities might not be supported with the lower subscription level, in this case it will be necessary to use the --force flag to acknowledge potential breaking changes that could arise from downgrading the subscription.
neteye tenant¶
The neteye tenant command helps you manage the configuration of NetEye Tenants.
neteye tenant config create¶
The neteye tenant config create command configures a new NetEye Tenant. The command creates the actual configuration of the Tenant, configures the services dedicated to the Tenant and, if issued on a NetEye Cluster, synchronizes the Tenant configuration on all the Cluster Nodes. The command can also be run on a Single Node.
Usage:
neteye# neteye tenant config create TENANT_NAME [OPTIONS]
Note that the name of the Tenant must respect the following constraints:
- It must match the regex /^[a-zA-Z0-9_]{1,32}$/, i.e. it must contain only alphanumeric characters and underscores, and must be between 1 and 32 characters long
- It can not contain the icinga2 string
However, any type of string or character is allowed for the user-facing Tenant name set via the --display-name option.
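The constraints listed above can be expressed as a short validation sketch. The helper name is illustrative and not part of the neteye CLI; only the regex and the icinga2 exclusion come from the documentation.

```python
import re

# Sketch of the Tenant name constraints described above: 1-32
# alphanumeric/underscore characters, and never containing "icinga2".
TENANT_NAME_RE = re.compile(r"^[a-zA-Z0-9_]{1,32}$")

def is_valid_tenant_name(name: str) -> bool:
    return bool(TENANT_NAME_RE.match(name)) and "icinga2" not in name
```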
Options:

| Option | Description |
|---|---|
| --display-name | (Mandatory) A more user-friendly name of the Tenant, used for visualization purposes. It must be unique and can contain spaces and special characters. |
| --enable-module | (Optional) Enables NetEye Feature Modules at Tenant level, chosen from the set of Feature Modules installed on the NetEye Master. The option accepts multiple values. |
| --influxdb-node | (Optional) The hostname of the InfluxDB-only node, as defined in the cluster configuration. |
| --alyvix-metrics-retention | (Optional) The retention, in days, of the Alyvix Test Cases performance metrics stored in the InfluxDB database dedicated to the Tenant. |
| --custom-override-grafana-org | (Optional) The custom Grafana organization associated with the Tenant. |
| --custom-override-glpi-entity | (Optional) The custom GLPI entity associated with the Tenant. |
| --force | (Optional) Forces the command to overwrite the existing configuration of the Tenant. This option completely overwrites the existing configuration; please refer to neteye tenant config modify if you need to partially modify the configuration of an existing Tenant. |
neteye tenant config apply¶
The neteye tenant config apply command sets up the services dedicated to the Tenant passed as argument. This command is used internally by neteye tenant config create, so NetEye users do not need to run it explicitly.
Usage:
neteye# neteye tenant config apply TENANT_NAME
Options:

| Option | Description |
|---|---|
| --all | (Optional) Applies the configuration of all the configured NetEye Tenants. This option is mutually exclusive with TENANT_NAME. |
neteye tenant config modify¶
The neteye tenant config modify command allows you to modify the configuration of an existing NetEye Tenant. The command can also be run on a Single Node.
Usage:
neteye# neteye tenant config modify TENANT_NAME [OPTIONS]
Options:

| Option | Description |
|---|---|
| --influxdb-node | (Optional) |
| --enable-module | (Optional) |
| --alyvix-metrics-retention | (Optional) |
| --custom-override-grafana-org | (Optional) |
| --custom-override-glpi-entity | (Optional) |
Please refer to neteye tenant config create for a detailed description of the available options.
Certain values can only be set once. If they are already set, they must be overwritten either manually or with neteye tenant config create --force. This, however, does NOT clean up the old configuration or migrate any data. Currently the fields that can be set only once are: display-name, influxdb-node, custom-override-grafana-org and custom-override-glpi-entity. For more information, please refer to the official channels: sales, consultants, or support portal.
neteye dpo setup¶
The neteye dpo setup command sets up, directly from NetEye, the DPO machine to run the El Proxy verification.
The setup is based on the configuration specified in /etc/neteye-dpo, as described in How to Setup the Automatic Verification of Blockchains.
The configuration file in JSON format contains the following attributes:
- dpo_host: the IP address or hostname of the DPO machine you would like to configure
- dpo_user: the user performing the SSH connection to the DPO machine
- elastic_blockchain_proxy_d_path (Optional): the path to a folder containing additional toml configuration files, as described in El Proxy Configuration
- retention_policy_d_path (Optional): the path to a folder containing customised retention policies, as described in Adding Custom Retentions to El Proxy
- es_ca (Optional): the path to the Elasticsearch root CA, needed for the connection when verifying the blockchain. If not specified, the default CA file /neteye/local/elasticsearch/conf/certs/root-ca.crt will be used.
- blockchains_verification: an array containing a JSON object for each verification that you would like to configure. Each verification object in turn contains the following attributes:
  - tenant: the Tenant of the blockchain you would like to verify
  - retention: the retention of the blockchain you would like to verify
  - tag: the tag of the blockchain you would like to verify
  - webhook_host: the FQDN of the host where your Tornado Webhook Collector is running
  - webhook_token: the secret token chosen for the Tornado Webhook Collector
  - logs_file_size_limit_in_megabytes: the maximum size that the DPO logs and reports can occupy. The most recent log and report will be preserved regardless of their size.
  - cron_scheduling: a JSON object that specifies the scheduling of the verification. For more information about the values each property can assume, you can consult this online guide
    - minute: minute of the hour at which the verification should take place
    - hour: hour of the day at which the verification should take place
    - day: day of the month on which the verification should take place
    - month: month in which the verification should take place
    - week_day: day of the week on which the verification should take place
  - es_client_cert_path (Optional): path of the client certificate used to connect to Elasticsearch for the verification. If not specified, the default path /neteye/local/elasticsearch/conf/certs/neteye_ebp_verify_<tenant>.crt.pem will be used
  - es_client_key_path (Optional): path of the client private key used to connect to Elasticsearch for the verification. If not specified, the default path /neteye/local/elasticsearch/conf/certs/private/neteye_ebp_verify_<tenant>.key.pem will be used
  - web_server_ca (Optional): path to the certificate used by the webserver hosting the Tornado Webhook Collector
  - additional_parameters (Optional): a list of additional parameters, as strings, that will be passed to the El Proxy verify command
For an example of a configuration in the /etc/neteye-dpo file, please consult Step 1. Configure the blockchain verification.
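The shape of such a configuration can be sketched as follows. Every concrete value below (hostnames, tenant, tag, retention, token) is a placeholder assumption made for illustration; the authoritative example lives in the guide referenced above.

```python
import json

# Minimal illustrative /etc/neteye-dpo structure built from the
# attributes described above; all values are placeholders.
dpo_config = {
    "dpo_host": "dpo.example.com",
    "dpo_user": "dpo",
    "blockchains_verification": [
        {
            "tenant": "tenant_a",
            "retention": "6_months",
            "tag": "elastic_agent",
            "webhook_host": "neteye.example.com",
            "webhook_token": "REPLACE_ME",
            "logs_file_size_limit_in_megabytes": 100,
            # Standard cron-style fields: run daily at 02:00.
            "cron_scheduling": {
                "minute": "0",
                "hour": "2",
                "day": "*",
                "month": "*",
                "week_day": "*",
            },
        }
    ],
}

# Round-trip through JSON as a basic sanity check of the structure.
parsed = json.loads(json.dumps(dpo_config))
```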
Moreover, the command is also used to update/upgrade the verification container images on the DPO machine after a NetEye update/upgrade, which then causes the restart of the previously configured containers.
The final step of this procedure checks for inconsistencies between the current state of /etc/neteye-dpo and the DPO machine. This process removes any containers (along with the corresponding blockchain keys and certificates) present on the DPO machine but not documented in your configuration file, and thus also facilitates the removal of any blockchain verification. Before each removal, you will be prompted to confirm the action.
neteye alyvix-node¶
neteye alyvix-node setup¶
The neteye alyvix-node setup command sets up the connection of the specified Alyvix node to NetEye. Additionally, the --all flag can be used to configure all available Alyvix nodes. Please note that regardless of any options included, this command must be executed from the NetEye Master.
Usage:
neteye# neteye alyvix-node setup [ALYVIX_NODE_HOSTNAME | --all]
Options:

| Option | Description |
|---|---|
| --all | (Optional) Executes the setup for all the Alyvix nodes listed in the Director. This option is mutually exclusive with ALYVIX_NODE_HOSTNAME. |
neteye cluster install¶
The neteye cluster install command creates a basic Corosync/Pacemaker cluster with a floating IP, starting from the configuration file located at /etc/neteye-cluster. If --force is specified, the command tries to create a new cluster, destroying any existing cluster.
This command automatically updates on each node the password of the hacluster user, used for authenticating the nodes to the cluster. If /root/.pwd_hacluster does not contain a password, a newly created password is saved in the file.
Usage:
neteye# neteye cluster install [-y] [--force]
Options:

| Option | Description |
|---|---|
| -y | (Optional) Non-interactive installation |
| --force | (Optional) Forces the creation of the cluster (there will be no way to recover data lost as a result of this action) |
Supporting Scripts¶
Two scripts complement the abilities of the neteye update and neteye upgrade commands: neteye_secure_install and neteye_finalize_installation. For more details, refer to the next two sections.
neteye_secure_install¶
This utility script has been deprecated starting from NetEye 4.36; you must now use neteye install instead.
The neteye_secure_install script remains available only for internal procedures such as the neteye update and neteye upgrade commands.
neteye_finalize_installation¶
neteye_finalize_installation is the last command executed during an upgrade procedure and makes sure that the correct NetEye version is stored. It is the last task of neteye upgrade.
Note
This command should never be used manually in the update and upgrade procedures, as it is called automatically by the neteye update and neteye upgrade commands. If you need to launch it manually, follow the steps described below.
Complete the upgrade process by launching the following script:
# neteye_finalize_installation
Note
You should launch the finalize command only if you want to perform the upgrade manually and only if all previous steps have been completed successfully. If you encounter any errors or problems during the upgrade process, please contact our service and support team to evaluate the best way forward for upgrading your NetEye system.