
Telegraf Configuration

A default configuration file for a Telegraf agent can be generated by executing:

telegraf config > <telegraf_configuration_directory>/${INSTANCE}.conf

The generated file <telegraf_configuration_directory>/${INSTANCE}.conf contains a complete list of configuration options. By default, InfluxDB is enabled as the output, while cpu, disk, diskio, kernel, mem, processes, and system are enabled as inputs. Before starting the Telegraf agent, edit the initial configuration to specify your inputs (where the metrics come from) and outputs (where the metrics go). Please refer to the official documentation on how to configure Telegraf for your specific use case.
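
As a minimal sketch (not the full generated file), a configuration with a single input and the InfluxDB output could look like the following; the URL and database name are illustrative placeholders:

# Collect CPU metrics from the local host
[[inputs.cpu]]
  percpu = true
  totalcpu = true

# Send the collected metrics to InfluxDB (placeholder URL and database)
[[outputs.influxdb]]
  urls = ["https://influxdb.neteyelocal:8086"]
  database = "telegraf"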

Note

Please note that the configuration path may change depending on your installation version and operating system. In NetEye, <telegraf_configuration_directory> is located at /neteye/local/telegraf/conf/.

Warning

Files matching /neteye/local/telegraf/conf/neteye_* are NetEye configuration files and must not be modified by the user. If you need to add configurations to these Telegraf instances, place them in the respective .d drop-in folders. For example, to extend /neteye/local/telegraf/conf/neteye_consumer_influxdb_telegraf_master.conf you can add a file inside /neteye/local/telegraf/conf/neteye_consumer_influxdb_telegraf_master.d/.
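
For instance, a hypothetical drop-in file named my_extra_input.conf placed in that folder could add a further input without touching the NetEye-managed configuration:

# /neteye/local/telegraf/conf/neteye_consumer_influxdb_telegraf_master.d/my_extra_input.conf
# Hypothetical drop-in: adds a ping input to the NetEye-managed Telegraf instance
[[inputs.ping]]
  urls = ["192.0.2.10"]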

Running a Local Telegraf Instance in NetEye

To run a Telegraf instance in NetEye, the user must create a dedicated configuration file, i.e. $INSTANCE.conf, in the directory /neteye/local/telegraf/conf/, as already described in the Telegraf Configuration section, and then start the service using the command below:

neteye# systemctl start telegraf-local@${INSTANCE}

The telegraf-local service will load the configuration file named $INSTANCE.conf: e.g., telegraf-local@test.service will look for the configuration file /neteye/local/telegraf/conf/test.conf.
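
For example, the complete workflow for a hypothetical instance named test could look as follows:

neteye# telegraf config > /neteye/local/telegraf/conf/test.conf
neteye# vi /neteye/local/telegraf/conf/test.conf    # adjust inputs and outputs as needed
neteye# systemctl start telegraf-local@test
neteye# systemctl status telegraf-local@test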

Note

Please note that all NetEye installations use the telegraf-local service instead of the standard telegraf service; telegraf-local is enhanced with NetEye-specific functions that guarantee a flawless interaction with NetEye. Moreover, on a NetEye Master, a service based on telegraf-local takes care of collecting the Master's default metrics.

Telegraf logs are collected by journald and can be viewed with journalctl. For example, to follow the log of the Telegraf instance ${INSTANCE} (service unit telegraf-local@${INSTANCE}), the user can type:

neteye# journalctl -u telegraf-local@${INSTANCE} -f

However, the Telegraf instance can be configured to write the logs to a specific file. This can be set in the configuration file as follows:

logfile = "/neteye/local/telegraf/log/${INSTANCE}.log"

Debugging output can be enabled by setting the debug flag to true in the configuration file:

debug = true
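
Both logfile and debug belong to the [agent] section of the Telegraf configuration, so a combined sketch looks like this:

[agent]
  ## Write logs to a dedicated file instead of journald
  logfile = "/neteye/local/telegraf/log/${INSTANCE}.log"
  ## Enable debug-level log messages
  debug = true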

Note

This is a local service, not a clusterized one; hence, it runs only on the node where you started it.

InfluxDB-only Nodes

Warning

Support for InfluxDB-only nodes is currently experimental. These nodes are designed to offload traffic from clusters that process an extraordinarily high volume of metrics. If you think this could be beneficial for you, please get in touch with your consultant. The feature will become publicly available in upcoming releases.

Starting from NetEye 4.27, cluster installations can use an InfluxDB-only node as the target instance for the data collected by the Telegraf consumer of a NetEye Tenant.

To use an InfluxDB-only node, you have to create an entry of type InfluxDBOnlyNodes in the file /etc/neteye-cluster, as in the following example:

{
  "Hostname" : "my-neteye-cluster.example.com",
  "Nodes" : [
    {
      "addr" : "192.168.1.1",
      "hostname" : "my-neteye-01",
      "hostname_ext" : "my-neteye-01.example.com",
      "id" : 1
    },
    {
      "addr" : "192.168.1.2",
      "hostname" : "my-neteye-02",
      "hostname_ext" : "my-neteye-02.example.com",
      "id" : 2
    },
    {
      "addr" : "192.168.1.3",
      "hostname" : "my-neteye-03",
      "hostname_ext" : "my-neteye-03.example.com",
      "id" : 3
    },
    {
      "addr" : "192.168.1.4",
      "hostname" : "my-neteye-04",
      "hostname_ext" : "my-neteye-04.example.com",
      "id" : 4
    }
  ],
  "ElasticOnlyNodes": [
    {
      "addr" : "192.168.1.5",
      "hostname" : "my-neteye-05",
      "hostname_ext" : "my-neteye-05.example.com",
      "id" : 5
    }
  ],
  "VotingOnlyNode" : {
    "addr" : "192.168.1.6",
    "hostname" : "my-neteye-06",
    "hostname_ext" : "my-neteye-06.example.com",
    "id" : 6
  },
  "InfluxDBOnlyNodes": [
    {
      "addr" : "192.168.1.7",
      "hostname" : "my-neteye-07",
      "hostname_ext" : "my-neteye-07.example.com"
    }
  ]
}

By default, the parameters used to connect to the InfluxDB instance are the following:

  • Port: 8086

  • InfluxDB administrator user: root

If you need to use different parameters, for example because the name of the InfluxDB administrator user is admin instead of root, you can specify them in the node definition, as follows:

{
  "InfluxDBOnlyNodes":
    [
      {
        "addr" : "192.168.1.7",
        "hostname" : "my-neteye-07",
        "hostname_ext" : "my-neteye-07.example.com",
        "influxdb_connection": {
          "port": 8085,
          "admin_username": "admin"
        }
      }
    ]
}

After the node has been added to the cluster configuration, make sure the configuration is synchronized to all Cluster Nodes by executing the following command:
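
cluster-node-1# neteye config cluster sync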

Subsequently, you need to add the password of the InfluxDB administrator user to the file /root/.pwd_influxdb_username_hostname and adjust its permissions so that it is readable only by the root system user. For example, if the default root InfluxDB user is used to connect to the external InfluxDB instance defined above, the commands to be executed are the following:

cluster-node-1# echo "password" > /root/.pwd_influxdb_root_neteye04.neteyelocal
cluster-node-1# chmod 640 /root/.pwd_influxdb_root_neteye04.neteyelocal
cluster-node-1# chown root:root /root/.pwd_influxdb_root_neteye04.neteyelocal

Afterwards, it is possible to create a NetEye Tenant that uses the added node as InfluxDB target, by following the Configuration of Tenants procedure.

Write Data to InfluxDB

Starting with NetEye 4.19, InfluxDB is protected with username and password authentication. Hence, to send data to InfluxDB you must create a dedicated user with limited privileges to be used in the Telegraf configuration. For example, to create a write-only user on the database icinga2, you can run the following InfluxQL statements:

CREATE USER "myuser" WITH PASSWORD 'securepassword'
GRANT WRITE ON "icinga2" TO "myuser"

Hint

The default InfluxDB administrator username is root, and its password can be found in the file /root/.pwd_influxdb_root.
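
As a sketch, assuming the default root administrator and the InfluxDB endpoint used later in this section, the statements above can be issued through the influx command line client:

neteye# influx -ssl -host influxdb.neteyelocal -username root -password "$(cat /root/.pwd_influxdb_root)"
> CREATE USER "myuser" WITH PASSWORD 'securepassword'
> GRANT WRITE ON "icinga2" TO "myuser"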

To write data to InfluxDB you must configure the dedicated output section in the Telegraf configuration to use an SSL connection and Basic Authentication:

[[outputs.influxdb]]
  urls = ["https://influxdb.neteyelocal:8086"]
  ## Target database (assuming the write grant above was issued on icinga2)
  database = "icinga2"

  username = "myuser"
  password = "securepassword"

Custom Retention Policies

By default, the InfluxDB Retention Policy is set to 550 days for all Telegraf databases, which include telegraf_master as well as one database per tenant, following the procedure and naming conventions explained in the Multi Tenancy configuration. You can change it under Configuration / Modules / analytics / Configuration. To apply the new settings, run neteye install.
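
To inspect the retention policies currently defined on one of these databases, for example telegraf_master, you can run the following InfluxQL statement (e.g. via the influx client with the administrator credentials mentioned below):

SHOW RETENTION POLICIES ON "telegraf_master"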

Warning

Keep in mind that modifying the duration of a retention policy retroactively affects existing shards: data older than the new duration will be deleted.

The Telegraf Agent can also define its own Retention Policy. This may be useful when, for example, the Telegraf Agent produces metrics that either occupy a lot of disk space or need to be kept for a long time due to their importance.

Suppose that you want some Telegraf metrics to be stored on InfluxDB in the database my_tenant with the Retention Policy named six_months. In this case you should proceed as follows:

  1. Ensure that the six_months Retention Policy is present in InfluxDB for the database my_tenant (see the InfluxQL sketch after this list)

  2. Identify the Telegraf Agent(s) producing the metrics and log in to the host where the Agent is running

  3. Modify the configuration of the Telegraf Agent(s) by adding the tag retention_policy with value six_months to the metrics. For example, you can add the tag in the global_tags section of the Telegraf Agent:

    [global_tags]
      retention_policy = "six_months"
    
  4. Restart the Telegraf Agent
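
For step 1, a sketch of the InfluxQL needed to create and verify the six_months Retention Policy; the 180-day duration is only an assumption for "six months" and should be adapted to your needs:

CREATE RETENTION POLICY "six_months" ON "my_tenant" DURATION 180d REPLICATION 1
SHOW RETENTION POLICIES ON "my_tenant"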

Once this procedure has been performed, the metrics gathered by the Telegraf Agent will automatically be written with the InfluxDB Retention Policy specified by the Agent's retention_policy tag. This works because the Telegraf consumers running on the NetEye Master are pre-configured by NetEye to write to InfluxDB using the Retention Policy they receive in the retention_policy tag.
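
To verify the result you can query the fully qualified path database.retention_policy.measurement; the cpu measurement below is only a hypothetical example:

SELECT * FROM "my_tenant"."six_months"."cpu" ORDER BY time DESC LIMIT 5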