User Guide


Before you start

As part of the NetEye Core, the Tornado module does not require additional installation. However, to start passive monitoring of your infrastructure with Tornado, some basic actions, referred to as Primary configuration, must be carried out.

Tuning your infrastructure

Since the passive monitoring process does not involve installing agents on the systems and devices to be monitored, those systems must be configured to send a particular type of data to NetEye. Tune your own system to send events taking into account its properties, architecture and setup.

Tornado Collectors are preconfigured out of the box. However, some of them require additional steps before they can receive events sent by your system or device; e.g. to send SMS events to the Tornado SMS Collector, the modem and smstools must be configured first.

For more details on each particular type of Collector, please see Tornado Collectors.

Multitenancy Roles Configuration

If your NetEye installation is tenant aware, the roles associated with each user must be configured to limit their access to only the Tornado configuration they are allowed to work with.

In the NetEye roles (Configuration / Access Control / Roles), add or edit the role related to the tenant-limited users. In the role configuration details you will find the tornado module section, where you can set the tenant ID in the tornado/tenant_id restriction.


You can find the list of available Tenant IDs by reading the directory names in /etc/neteye-satellites.d/. You can use this command:

neteye# basename -a $(ls -d /etc/neteye-satellites.d/*/)
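The same lookup can be sketched in Python for illustration; the directory layout is the one mentioned above, and the helper name is ours:

```python
# Sketch: list the available Tenant IDs by reading the subdirectory names
# of the satellite configuration directory (e.g. /etc/neteye-satellites.d/).
from pathlib import Path


def list_tenant_ids(base_dir):
    """Return the names of all subdirectories, i.e. the Tenant IDs."""
    base = Path(base_dir)
    return sorted(p.name for p in base.iterdir() if p.is_dir())
```

This is the Python equivalent of the `basename -a $(ls -d …)` command above.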

Tenant-based Configuration

Tornado configuration is tenant-specific. In single-tenant environments, the configuration you create applies to the Master Tenant.

If you would like to use a multi-tenant environment, i.e. send data to NetEye from multiple tenants, create a tenant with neteye tenant config create if this has not been done yet. Consult The NetEye Command for more information on how to create a new tenant.

Please note that, due to the Implicit Lock mode, only one user at a time, from the pool of users belonging to all tenants within your NetEye installation, can modify the configuration.

Retrieving Payload of an Event

Before you start creating your Tornado configuration, you need to extract the payload of each received event. This can be done in the Processing Tree by following these steps:

  • Create a Filter which matches all incoming events of a chosen type (let’s use SNMP Traps as an example):

  {
    "type": "AND",
    "operators": [
      {
        "type": "equals",
        "first": "${event.type}",
        "second": "snmptrapd"
      }
    ]
  }
  • Create a Rule for the ‘snmptrap’ Filter by clicking on ‘Add rule’. The archive_all rule then writes all incoming traps in JSON format to a log file, which can be defined in /neteye/shared/tornado/conf/archive_executor.toml.

    If nothing is defined, all logs are written to /neteye/shared/tornado/data/archive/all/one_events.log.

  • Define the following action within a Rule:

    {
      "id": "archive",
      "payload": {
        "archive_type": "snmptrapd",
        "event": "${event}",
        "hostname": "${event.payload.src_ip}"
      }
    }
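As a rough illustration of how `${...}` placeholders such as the ones in the action payload resolve against an event, here is a simplified sketch; it is not Tornado's actual interpolation engine:

```python
# Simplified sketch of "${a.b.c}" placeholder resolution: the dotted path
# is walked through the event dictionary and the found value substituted.
import re


def resolve(template, event):
    """Replace each ${a.b.c} in template with the value found in event."""
    def lookup(match):
        value = event
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)
```

For example, `resolve("${event.payload.src_ip}", {"event": {"payload": {"src_ip": "10.0.0.5"}}})` yields the source IP as a string.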

Processing Tree

The Processing Tree presents all filters and rulesets within a tenant. The order of rules within a ruleset defines the sequence of their execution.

In order to continuously improve UX and usability of the Tornado Instance, NetEye provides a GUI based on the Carbon Design System’s best practices.


Fig. 153 Example Processing Tree.

To change the order, drag &amp; drop a rule using the handle button to the left of the rule name.

To start, select the tenant you would like to work with in the toolbar. You can find more information on the relation between Tornado configuration and your tenants in Tenant-based Configuration.

More information on Processing Tree Configuration can be found in a dedicated section.

Edit Mode

Switch to Edit mode in order to modify Tornado configuration with the help of the Processing Tree.

When you start modifying the configuration, Tornado continues to work with the existing configuration thanks to the Implicit Lock mode, while your changes are saved in a separate draft configuration. The new configuration must then be deployed to become operative.

Edit mode has other positive side effects: you do not need to complete the changes in one session, but can stop and then continue at a later point; another user can pick up the draft and complete it; and in case of a disaster (such as the abrupt end of the HTTPS connection to the GUI) it is possible to resume the draft from the point where it was left.

Implicit Lock Mode

Only one user at a time can modify the Processing Tree configuration. This prevents multiple users from changing the configuration simultaneously, which might lead to undesirable results and possibly to Tornado not working correctly due to incomplete or wrong configuration. When a user is editing the configuration, the actual, running configuration is left untouched: it continues to be operative and accepts incoming data to be processed.


Only one draft at a time is allowed; that is, editing multiple drafts is not supported!

Enable ‘Edit’ Mode

Before you start modifying Tornado configuration with the help of a Processing Tree, make sure that editing permission is granted by the NetEye role.

  1. In your NetEye installation, go to (Configuration / Access Control / Roles)

  2. Select an existing role or add a new one for configuring the permission

  3. Set the tornado/edit permission on the tornado Module section of the role details to ‘On’


Fig. 154 Tornado Module permissions

As a result, the ‘Edit’ switch will be available in the top right corner of the Tornado Configuration.

Add a Node

A Node must be added in order to create a Tornado configuration:

  1. Switch to Edit mode in the top right corner of your layout:

    When switching to Edit mode, a new draft is created on the fly if none is present, which is an exact copy of the running Tornado configuration. If not present in the draft, a root node of type Filter will be automatically added.

  2. Click on the “Add” button in the top right corner and select the parent node to which you want to add a new node - a Filter or a Ruleset.

  3. Optionally, click on the icon with the three dots on each node, which from now on will be called the overflow menu


    Fig. 155 Adding a node

    All nodes at the same level are ordered alphabetically.

  4. Define Filter node properties:

    • filter name: A unique string value composed only of letters, numbers and the “_” (underscore) character; it corresponds to the filename, stripped of its .json extension.

    • description

    • active: A boolean value; if false, the Filter’s children will be ignored.

    • filter: A boolean operator that, when applied to an event, returns true or false. This operator determines whether an Event matches the Filter; consequently, it determines whether an Event will be processed by the Filter’s inner nodes.

  5. The Filter node uses the same set of Constraints in the ‘WHERE’ tab as a Ruleset node. Based on your needs, a Filter node can be configured to process events of a particular type or from a particular device within your network, e.g.:

  {
    "type": "AND",
    "operators": [
      {
        "type": "equals",
        "first": "${event.payload.src_ip}",
        "second": ""
      }
    ]
  }

If needed, you can delete a Node from the overflow menu when in Edit mode.

Filters available by default

The Tornado Processing Tree provides some out-of-the-box Filters, which match all, and only, the Events originating from a given tenant. For more information on tenants in NetEye visit the dedicated page.

These Filters are created at the top level of the Processing Tree, in such a way that it is possible to set up tenant-specific Tornado pipelines.

Given for example a tenant named acme, the matching condition of the Filter for the acme tenant will be defined as:

    {
      "type": "equals",
      "first": "${event.metadata.tenant_id}",
      "second": "acme"
    }
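As a sketch of the matching logic this Filter expresses (illustrative Python, not Tornado's actual Rust implementation):

```python
# Illustrative check mirroring the tenant Filter above: an event matches
# when event.metadata.tenant_id equals the tenant's name.
def matches_tenant(event, tenant_id):
    """True when the event carries the given tenant_id in its metadata."""
    return event.get("metadata", {}).get("tenant_id") == tenant_id
```

An event without a tenant_id in its metadata simply does not match, so it never enters that tenant's pipeline.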

Keep in mind that these Filters must never be deleted nor modified, because they will be automatically re-created.


NetEye generates one Filter for each tenant, including the default master tenant.

Test Processing Tree Draft

To check whether the Processing Tree was configured properly, i.e. the Filters’ structure works as expected, you can send test events in the Test Event panel, where test events can be created by providing data in a dedicated form.


In order to check the correctness of a Draft without impacting the deployed configuration, make sure you run the test in Edit mode. If Edit mode is set to OFF, the deployed configuration will be used for the test.

  1. Open the Test Event panel by clicking on the lightning icon in the top right corner of the configuration screen

  2. Provide all the data required for a test event

    The event will be processed using the Draft and the result will be displayed, while keeping the existing configuration running.

  3. Switch the ‘Enable execution of actions’ option OFF.

    At this stage you are only checking the correctness of the draft, not yet triggering the actions of the rules within a ruleset.

  4. Execute a test by clicking “Run Test” button.

    The Event is sent to Tornado and the outcome of the operation is reported in the Processing Tree. Following the yellow line, it is possible to see the path that the event has taken. The nodes that matched the event are distinguishable by a full yellow lightning bolt, while those that partially matched show an empty bolt.


Fig. 156 A Processing Tree with an event result

Create Rules

In order to process a received Event and trigger an Action, an ordered set of Rules must be created inside a Ruleset node.

The order of the rules in a Ruleset defines the sequence of their execution. To change the order, drag &amp; drop a rule using the handle button to the left of the rule name.

If a Tornado Event matches a Rule’s conditions, the Rule triggers an Action, e.g. a command, process or operation, depending on your needs.

  1. Select or add a Ruleset in a Processing Tree

    All the rules presented in a Processing Tree belong to a particular Ruleset. Make sure you have selected the right child node, i.e. a Ruleset, following the procedure defined in the Add a Node section.

  2. Clicking on a Rule item or on the “Add rule” button opens a form to edit or add a rule, respectively. It is organized in tabs:

  3. Provide data for the basic Properties of a Rule:

  • rule name: A string value representing a unique rule identifier. It can be composed only of alphabetical characters, numbers and the “_” (underscore) character.

  • description

  • continue: A boolean value indicating whether to proceed with the event matching process if the current rule matches.

  • active: A boolean value; if false, the rule is ignored.

  4. Define the conditions of a rule that serve as matching principles for the events.

    The conditions are defined with the help of Constraints. There are two types of them:

  • WHERE: A set of operators that allows you to specify the condition where the rule should be matched; when applied to an event returns true or false

  • WITH: A set of regular expressions that extract values from an Event and associate them with named variables

    An event matches a rule only if the WHERE clause evaluates to true and all regular expressions in the WITH clause return non-empty values.

    Depending on the constraints used, you can add a rule to extract variables from the payload of an event. For this, an extractor should be added in the WITH tab of a particular rule. All extracted variables are displayed in the Test Event panel.


Fig. 157 Sample of extracted variables

Please consult the Advanced Filters and Operators section to learn more about WHERE and WITH operators and how to use them.
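The interplay between WHERE and WITH can be sketched as follows; this is an illustrative simplification in which each extractor is reduced to a single capturing regex:

```python
# Sketch of WITH-style extraction: each named regex must yield a non-empty
# value, otherwise the rule is only partially matched (WHERE may hold, but
# the required variables could not be extracted).
import re


def extract(variables, payload_text):
    """Apply each named regex; return None if any extraction fails."""
    out = {}
    for name, pattern in variables.items():
        m = re.search(pattern, payload_text)
        if m is None or not m.group(1):
            return None  # extraction failed: rule is partially matched
        out[name] = m.group(1)
    return out
```

For example, `extract({"sender": r"From: (\S+)"}, "From: alice@example.com")` returns the extracted sender, while a payload without a `From:` line yields `None`.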

  5. Specify the actions to be executed when an Event matches a Rule

    Please consult Tornado Actions for configuring Actions.

  6. After all the changes have been saved, send a test Event to check whether the filter structure and rule conditions were configured properly.

    Open the Test Event panel and provide all necessary event data, then run the test. At this point, a rule can be in one of the following states:

    • matched: If a rule matched the Event.

    • stopped: If a rule matched the Event and then stopped the execution flow. This happens if the continue flag of the rule is set to false.

    • partially matched: If the WHERE condition of the Rule matched but it was not possible to extract all the required variables.

    • not matched: If the rule did not match the Event.


Fig. 158 Example of processed rules

  • Matched rules: Extract_sender, Extract_subject, Archive_all

  • Partially matched: Extract_message

  • Not matched: Block_invalid_senders

For each rule in the table, the extracted variables and the generated Action payloads are shown. Two other buttons are visible, one for cleaning all the fields of the form and one for cleaning the outcome of the test.

  7. Deploy the configuration to production.

    In order to make your newly created or edited configuration operative, the draft must be deployed. This can be done with the deploy button in the top right corner of the Processing Tree layout, next to the ‘Save’ option.

    In a multi-tenant environment, a deployed configuration is applied to the particular tenant it was created for. In a single-tenant environment, the configuration is applied to the Master Tenant only.

Configuration Example

To showcase how the functionality provided in the Processing Tree can be used to configure Tornado, the following use case will serve as the basis for a configuration example.

Understanding the Use Case

As a user I want to be able to passively monitor a host that sends SNMP trap with the status of its backup operations.

In order to perform this, we need to configure Tornado to receive SNMP events, make Tornado create the host if it does not exist, and set the status based on the result of the backup operation.

Prerequisites: The Edit mode is ON; the tenant is chosen in accordance with the application of the configuration.

Step 1. Create a Filter node to match SNMP Trap events

To start, create a dedicated Filter node that matches all events of a particular type, SNMP Traps in this case. For that, use the following operators:

  {
    "type": "AND",
    "operators": [
      {
        "type": "equals",
        "first": "${event.type}",
        "second": "snmptrapd"
      }
    ]
  }

You can additionally create a Filter to match the events coming from a particular IP source, or create a dedicated rule for it in the following steps.

Step 2. Create a Ruleset node

Now that the Processing Tree is organized to catch SNMP Traps, optionally from a particular IP source, it is time to create a Ruleset and add a set of rules to process the events.

Name the Ruleset and proceed to creating the first Rule within it.

Step 3. Create a Rule to extract the variables required for defining an action in the following Rules.

In order to define the action in a dedicated rule, it is first necessary to extract the variables from the payload of the event. Assuming the payload of an event is

    {
      "dest_ip": "",
      "oids": {
        "DISMAN-EVENT-MIB::sysUpTimeInstance": {
          "content": "(1617618274) 187 days, 5:23:02.74",
          "datatype": "Timeticks"
        },
        "SNMPv2-MIB::snmpTrapOID.0": {
          "content": "SNMPv2-SMI::enterprises.14604.",
          "datatype": "OID"
        },
        "SNMPv2-SMI::enterprises.14604.": {
          "content": "default",
          "datatype": "STRING"
        },
        "SNMPv2-SMI::enterprises.14604.": {
          "content": "3",
          "datatype": "INTEGER"
        },
        "SNMPv2-SMI::enterprises.14604.": {
          "content": "Data Protection - Job Succeeded",
          "datatype": "STRING"
        }
      },
      "protocol": "UDP",
      "src_ip": "",
      "src_port": "58953"
    }

in the ‘WITH’ clause create separate extractors for HostName, checkResult and DetectedCriteria:

    {
      "HostName": {
        "from": "${event.payload.oids.\"SNMPv2-SMI::enterprises.14604.\".content}",
        "regex": {
          "type": "Regex",
          "match": ".*",
          "group_match_idx": 0,
          "all_matches": false
        },
        "modifiers_post": [
          {
            "type": "Lowercase"
          }
        ]
      },
      "checkResult": {
        "from": "\n${event.payload.oids.\"SNMPv2-SMI::enterprises.14604.\".content}",
        "regex": {
          "type": "Regex",
          "match": "(?m)^Detected Criteria: (.*\\w)\\s*$",
          "group_match_idx": 1,
          "all_matches": false
        },
        "modifiers_post": []
      },
      "DetectedCriteria": {
        "from": "\n${event.payload.oids.\"SNMPv2-SMI::enterprises.14604.\".content}",
        "regex": {
          "type": "Regex",
          "match": "(?m)^Detected Criteria: (.*\\w)\\s*$",
          "group_match_idx": 1,
          "all_matches": false
        },
        "modifiers_post": [
          {
            "type": "Map",
            "mapping": {
              "Job Succeeded": "0",
              "Job Succeeded with Errors": "1",
              "No Backup for last [0-9]+ Days": "2"
            },
            "default_value": "def_val"
          }
        ]
      }
    }
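To illustrate what the DetectedCriteria extractor above does, here is a simplified Python equivalent. The sample input line and the plain-dictionary lookup standing in for the Map modifier are simplifying assumptions of this sketch:

```python
# Illustrative equivalent of the DetectedCriteria extractor: capture the
# text after "Detected Criteria: " and map it onto an exit status.
import re

MAPPING = {
    "Job Succeeded": "0",
    "Job Succeeded with Errors": "1",
    "No Backup for last [0-9]+ Days": "2",
}


def detected_criteria(oid_content, default="def_val"):
    """Return the mapped exit status, the default, or None if no match."""
    m = re.search(r"(?m)^Detected Criteria: (.*\w)\s*$", oid_content)
    if m is None:
        return None  # WITH clause fails: the rule is partially matched
    return MAPPING.get(m.group(1), default)
```

A content line such as `Detected Criteria: Job Succeeded` (an assumed sample) would map to exit status `"0"`, while a payload without such a line leaves the extraction empty.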

Step 4. Add a Rule to define the Actions required to create a Host and assign the status based on the extracted variables

Add a Rule within the same Ruleset and go to the Actions tab. Choose the SMART_MONITORING_CHECK_RESULT action type from the dropdown with available options.

Define the following parameters:

    {
      "id": "smart_monitoring_check_result",
      "payload": {
        "check_result": {
          "exit_status": "${_variables.cv_vars.DetectedCriteria}",
          "performance_data": "",
          "plugin_output": "${_variables.cv_vars.DetectedCriteria}"
        },
        "host": {
          "imports": "example-status-dummy",
          "object_name": "${_variables.cv_vars.HostName}",
          "vars": {
            "created_by": "tornado"
          }
        },
        "service": {
          "imports": "example-passive",
          "object_name": "backup-status",
          "vars": {
            "created_by": "tornado"
          }
        }
      }
    }

Step 5. Test whether the Event matches the rules you created within a Ruleset.

Save the configuration and use Test Event panel in order to make sure the event matches the rules within the ruleset.

If you want to check whether the host/service is actually created in the Dashboard and the status is assigned as a result of the match, enable action execution in the Test Event panel when running a test.

If the test succeeds, deploy the configuration to production.

As a result, a host/service is created in the Dashboard, with the monitoring result mapped onto an Icinga Service.

Import and Export Configuration

The Tornado GUI provides multiple ways to import and export the whole configuration or just a subset of it.

Export Configuration

You have three possibilities to export Tornado configuration or part of it:

  1. entire configuration: select the root node from the Processing Tree View and click on the export button to download the entire configuration

  2. a node (either a ruleset or a filter): select the node from the Processing Tree View and click on the export button to download the node and its sub-nodes

  3. a single rule: navigate to the rules table, select a rule, and click on the export button


You can back up and download the Tornado configuration by exporting the entire configuration.

Import Configuration

You can use the import feature to upload to NetEye a previously downloaded configuration, new custom rules, or even the configuration from another NetEye instance.

When clicking on the import button a popup will appear with the following fields:

  • Node File: the file containing the configuration


    When importing a single rule the field will be labeled as Rule File.

  • Replace whole configuration?: If selected, the imported configuration will replace the root node and all of its sub-nodes.


    You can restore a previous Tornado configuration by selecting this option.

  • Parent Node: The parent node where to add the imported configuration, by default it is set to the currently selected node.


When a node or a rule with the same name as an already existing one is imported, the name of the new node/rule will be suffixed with _imported.
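The rename-on-collision behaviour described above can be sketched as (the helper name is ours):

```python
# Sketch: an imported node or rule whose name already exists in the target
# configuration gets the "_imported" suffix; otherwise the name is kept.
def import_name(name, existing_names):
    """Return the name to use for an imported node or rule."""
    return name + "_imported" if name in existing_names else name
```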

Tornado Collectors

Tornado provides a number of preconfigured Collectors that handle inputs from various data sources:

  1. Email Collector

  2. Rsyslog Collector

  3. Webhook Collector

  4. Nats JSON Collector

  5. Icinga 2 Collector

  6. SNMP Trap Daemon Collector

  7. SMS Collector

Most Tornado Collectors work out of the box and do not require manual configuration; see Tornado Collectors for more details. However, some of them may need to be configured to work in accordance with your needs.

Tornado Webhook Collector

The Webhook Collector is a standalone HTTP server built on actix-web that listens for REST calls from a generic webhook, generates Tornado Events from the webhook JSON body, and sends them to the Tornado Engine.

On startup, it creates a dedicated REST endpoint for each configured webhook. Calls received by an endpoint are processed by the embedded JMESPath Collector, which uses them to produce Tornado Events. In the final step, the Events are forwarded to the Tornado Engine through the configured connection type.

You must configure a JSON file for each webhook in the /neteye/shared/tornado_webhook_collector/conf/webhooks/ folder.

For each webhook, you must provide three values in order to successfully create an endpoint:

  • id: The webhook identifier. This will determine the path of the endpoint; it must be unique per webhook.

  • token: A security token that the webhook issuer has to include in the URL as part of the query string (see the example at the bottom of this page for details). If the token provided by the issuer is missing or does not match the one owned by the Collector, then the call will be rejected and an HTTP 401 code (UNAUTHORIZED) will be returned.

  • collector_config: The transformation logic that converts a webhook JSON object into a Tornado Event. It consists of a JMESPath Collector configuration as described in its specific documentation.

  {
    "id": "<webhook_id>",
    "token": "<webhook_token>",
    "collector_config": {
      "event_type": "<webhook_custom_event_type>",
      "payload": {
        "source": "${@}"
      }
    }
  }

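For illustration, the URL a webhook issuer would call can be assembled as follows; the `/event/<id>` path layout and the host name used here are assumptions of this sketch, not guaranteed NetEye values:

```python
# Sketch: the webhook id determines the endpoint path, and the security
# token travels in the URL query string, as described above.
from urllib.parse import urlencode


def webhook_url(base, webhook_id, token):
    """Build the URL a webhook issuer would call (illustrative layout)."""
    return f"{base}/event/{webhook_id}?{urlencode({'token': token})}"
```

If the token is missing or wrong, the Collector rejects the call with HTTP 401 (UNAUTHORIZED), as noted above.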
Tornado Icinga 2 Collector

The Icinga 2 Collector subscribes to the Icinga 2 API event streams, generates Tornado Events from the Icinga 2 Events, and publishes them on the Tornado Engine TCP address.

The Icinga 2 Collector executable is built on actix.

On startup, it connects to an existing Icinga 2 Server API and subscribes to user-defined Event Streams. Each Icinga 2 Event published on a stream is processed by the embedded JMESPath Collector, which uses it to produce a Tornado Event that is finally forwarded to the Tornado Engine’s TCP address.

The streams are to be configured as JSON files in /neteye/shared/tornado_icinga2_collector/conf/streams/.

More than one stream subscription can be defined. For each stream, you must provide two values in order to successfully create a subscription:

  • stream: the stream configuration composed of:

    • types: An array of Icinga 2 Event types;

    • queue: A unique queue name used by Icinga 2 to identify the stream;

    • filter: An optional Event Stream filter. Additional information about the filter can be found in the official documentation.

  • collector_config: The transformation logic that converts an Icinga 2 Event into a Tornado Event. It consists of a JMESPath Collector configuration as described in its specific documentation.

For all Icinga 2 events

  {
    "stream": {
      "types": ["CheckResult", ...],
      "queue": "icinga2_AllEvents_all"
    },
    "collector_config": {
      "event_type": "icinga2_AllEvents_all",
      "payload": {
        "response": "${@}"
      }
    }
  }

For check result events

  {
    "stream": {
      "types": ["CheckResult"],
      "queue": "icinga2_CheckResult_all"
    },
    "collector_config": {
      "event_type": "icinga2_CheckResult_all",
      "payload": {
        "response": "${@}"
      }
    }
  }

For notification events

  {
    "stream": {
      "types": ["Notification"],
      "queue": "icinga2_Notification_all"
    },
    "collector_config": {
      "event_type": "icinga2_Notification_all",
      "payload": {
        "response": "${@}"
      }
    }
  }

For statechange events

  {
    "stream": {
      "types": ["StateChange"],
      "queue": "icinga2_StateChange_all"
    },
    "collector_config": {
      "event_type": "icinga2_StateChange_all",
      "payload": {
        "response": "${@}"
      }
    }
  }


Based on the Icinga 2 Event Streams documentation, multiple HTTP clients can use the same queue name as long as they use the same event types and filter.

Email Collector

When the Email Collector receives a valid MIME email message as input, it parses it and produces a Tornado Event with the extracted data.

To forward received Email Events to NetEye, relay all email sent to eventgw@domain on your SMTP server to the NetEye 4 Tornado Email Collector.

Attachments are included as well: text-file attachments are carried in plain text, while all others are encoded in base64.

For example, passing this email with attachments:

From: "Francesco" <>
Subject: Test for Mail Collector - with attachments
To: "Benjamin" <>,
 francesco <>
Date: Sun, 02 Oct 2016 07:06:22 -0700 (PDT)
MIME-Version: 1.0
Content-Type: multipart/mixed;
Content-Language: en-US

This is a multi-part message in MIME format.
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 7bit

<html>Test for Mail Collector with attachments</html>

Content-Type: application/pdf;
Content-Transfer-Encoding: base64
Content-Disposition: attachment;


Content-Type: text/plain; charset=UTF-8;
Content-Transfer-Encoding: base64
Content-Disposition: attachment;


will generate this Event:

  {
    "type": "email",
    "created_ms": 1554130814854,
    "payload": {
      "date": 1475417182,
      "subject": "Test for Mail Collector - with attachments",
      "to": "\"Benjamin\" <>, francesco <>",
      "from": "\"Francesco\" <>",
      "cc": ",",
      "body": "<html>Test for Mail Collector with attachments</html>",
      "attachments": [
        {
          "filename": "sample.pdf",
          "mime_type": "application/pdf",
          "encoding": "base64",
          "content": "JVBERi0xLjMNCiXi48/TDQoNCjEgMCBvYmoNCjw8DQovVHlwZSAvQ2F0YWxvZw0KT0YNCg=="
        },
        {
          "filename": "sample.txt",
          "mime_type": "text/plain",
          "encoding": "plaintext",
          "content": "txt file context for email Collector\n1234567890987654321\n"
        }
      ]
    }
  }

Within the Tornado Event, the filename and mime_type properties of each attachment are the values extracted from the incoming email.

Instead, the encoding property refers to the content encoding in the Event itself, which is one of two types:

  • plaintext: The content is included in plain text

  • base64: The content is encoded in base64
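The encoding rule described above can be sketched as follows (an illustration of the documented behaviour, not the Collector's actual code):

```python
# Sketch: text attachments are carried as plain text in the Event,
# everything else is base64-encoded.
import base64


def encode_attachment(mime_type, content):
    """Return (encoding, encoded_content) for an attachment body."""
    if mime_type.startswith("text/"):
        return "plaintext", content.decode("utf-8")
    return "base64", base64.b64encode(content).decode("ascii")
```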

SMS Collector

Before Tornado can correctly catch SMS, you should configure your SMS modem to send events to Tornado. You can follow the configuration procedure at SMS Modem Setup.


If you are migrating from the Eventhandler module to the Tornado SMS collector, please refer to the Migration to Tornado SMS Collector.

The Tornado SMS Collector will then receive a valid SMS like:

From: 39123456789
From_TOA: 91 international, ISDN/telephone
From_SMSC: 39123456789
Sent: 23-10-09 16:06:52
Received: 23-10-09 16:06:58
Subject: GSM1
Modem: GSM1
IMSI: 222018005102877
Report: no
Alphabet: ISO
Length: 20

Example text message

and based on that will create the following Tornado Event:

  {
    "type": "sms",
    "created_ms": 155412314854,
    "payload": {
      "sender": "+39123456789",
      "text": "Example text message"
    }
  }

Within the Tornado Event, the sender and text properties are the values extracted from the incoming SMS.
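A simplified sketch of how such an smstools message file could be turned into the event payload shown above; the `+` prefix on the sender is taken from the example Event, while the parsing itself is our illustration:

```python
# Sketch: an smstools message file has "Key: value" header lines, a blank
# line, then the message text. Extract the sender and text for the payload.
def parse_sms(raw):
    """Build a minimal SMS event payload from a raw smstools message."""
    header_part, _, text = raw.partition("\n\n")
    headers = dict(
        line.split(": ", 1) for line in header_part.splitlines() if ": " in line
    )
    return {"sender": "+" + headers["From"], "text": text.strip()}
```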

Tornado Actions

An important part of creating a Tornado Rule is specifying the Actions to be carried out when an Event matches the Rule.

A selection of Actions available to be defined and configured is presented in the Rule configuration under the dedicated ‘Actions’ tab. Each individual Action type is handled by a particular Tornado Executor, which triggers the associated executable instructions.

There are several Action types, which can be grouped according to their logic.

  • Monitoring Actions are carried out by the Icinga 2 Executor, the Smart Monitoring Check Result Executor and the Director Executor. These Actions trigger actual monitoring instructions, e.g. setting process check results or creating hosts.

  • Logging Actions serve for logging data, be it the Event that triggered the Action or the Action’s output. These Actions are carried out by the Logger Executor, the Archive Executor and the Elasticsearch Executor.

In addition to executing the Monitoring and Logging Actions mentioned above, you can customize the processing of an Action by running custom scripts with the Script Executor, or loop through a set of data to execute a list of Actions for each entry with the Foreach Tool.

Below you will find a full list of Action types available to be defined in a Rule’s ‘Actions’ tab.

Smart Monitoring Check Result

The SMART_MONITORING_CHECK_RESULT Action type allows you to set a specific check result for a monitored object, even when the Icinga 2 object for which you want to carry out the Action does not exist. Moreover, the Smart Monitoring Check Result Executor responsible for carrying out the Action ensures that no outdated process-check-result overwrites newer check results already present in Icinga 2.

Note, however, that an Icinga agent cannot be live-created using the Smart Monitoring Executor, because an agent always requires an endpoint defined in the configuration, and the Icinga API does not support live creation of endpoints.

To ensure that outdated check results are not processed, the process-check-result action is carried out by Icinga 2 with the execution_start and execution_end parameters inherited from the Action definition or, if absent, set equal to the value of the created_ms property of the originating Tornado Event. The Discarded Check Results section explains how the Executor handles these cases.
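The timestamp fallback just described can be sketched as follows; the helper name is ours, and the milliseconds-to-seconds conversion reflects the fact that created_ms is in milliseconds while Icinga 2 expects epoch seconds:

```python
# Sketch: execution_start/execution_end come from the Action definition
# when present; otherwise both default to the event's created_ms converted
# from milliseconds to epoch seconds.
def execution_window(action_payload, created_ms):
    """Return the execution_start/execution_end pair for the check result."""
    ts = created_ms / 1000.0
    return {
        "execution_start": action_payload.get("execution_start", ts),
        "execution_end": action_payload.get("execution_end", ts),
    }
```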

The SMART_MONITORING_CHECK_RESULT action type should include the following elements in its payload:

  1. A check_result: The basic data to build the Icinga 2 process-check-result action payload. See more in the official Icinga 2 documentation.

       {
          "exit_status": "2",
          "plugin_output": "Output message"
       }

    The check_result should contain all mandatory parameters expected by the Icinga API except the following ones that are automatically filled by the Executor:

    • host

    • service

    • type

  2. A host: The data to build the payload which will be sent to the Icinga 2 REST API for the host creation.

       {
          "object_name": "myhost",
          "address": "",
          "check_command": "hostalive",
          "vars": {
             "location": "Rome"
          }
       }
  3. A service: The data to build the payload which will be sent to the Icinga 2 REST API for the service creation (optional)

       {
          "object_name": "myservice",
          "check_command": "ping"
       }

Discarded Check Results

Some process-check-results may be discarded by Icinga 2 if more recent check results already exist for the target object. In this situation the Executor does not retry the Action, but simply logs an error containing the tag DISCARDED_PROCESS_CHECK_RESULT in the configured Tornado Logger.

The log message showing a discarded process-check-result will be similar to the following excerpt, enclosed in an ActionExecutionError:

SmartMonitoringExecutor - Process check result action failed with error ActionExecutionError {
  message: "Icinga2Executor - Icinga2 API returned an unrecoverable error. Response status: 500 Internal Server Error.
    Response body: {\"results\":[{\"code\":409.0,\"status\":\"Newer check result already present. Check result for 'my-host!my-service' was discarded.\"}]}",
  can_retry: false,
  code: None,
  data: {
    "payload":{"execution_end":1651054222.0,"execution_start":1651054222.0,"exit_status":0,"plugin_output":"Some process check result","service":"my-host!my-service","type":"Service"}
  }
}
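The "do not retry" decision illustrated by this log can be sketched in a few lines. The following Python snippet is an illustrative reconstruction, not the actual Executor code; the is_discarded helper name is hypothetical:

```python
import json

DISCARDED_TAG = "DISCARDED_PROCESS_CHECK_RESULT"

def is_discarded(response_body: str) -> bool:
    # A check result is discarded when the Icinga 2 response body carries
    # a 409 result code ("Newer check result already present")
    results = json.loads(response_body).get("results", [])
    return any(int(result.get("code", 0)) == 409 for result in results)

body = '{"results":[{"code":409.0,"status":"Newer check result already present."}]}'
if is_discarded(body):
    # Do not retry; log the error with the dedicated tag instead
    print(f"{DISCARDED_TAG}: check result discarded by Icinga 2")
```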


Director

Tornado Actions for creating hosts and services are available under the DIRECTOR Action type.

The following elements of an Action are to be specified for the Director Executor to extract data from a Tornado Action and prepare it to be sent to the Icinga Director REST API:

  1. An action_name: create_host or create_service, which creates an object of type host or service in the Director, respectively. See more in the official Icinga 2 documentation.

  2. An action_payload (optional): The payload of the Director action.

       "object_type": "object",
       "object_name": "my_host_name",
       "address": "",
       "check_command": "hostalive",
       "vars": {
          "location": "Bolzano"
       }

  3. An icinga2_live_creation (optional): Boolean value, which determines whether to create the specified Icinga Object also in Icinga 2.

Icinga 2

The ICINGA 2 Action type allows you to define one of the existing Icinga 2 actions.

For the Icinga 2 Executor to extract data from a Tornado Action and prepare it to be sent to the Icinga 2 API, the following parameters are to be specified in the Action’s payload:

  1. An icinga2_action_name: The Icinga 2 action to perform.

  2. An icinga2_action_payload (optional): should contain all mandatory parameters expected by the specific Icinga 2 action.

       "exit_status": "2",
       "filter": "\"${_variables.hostname}\"",
       "plugin_output": "${event.payload.plugin_output}",
       "type": "Host"


Logger

For troubleshooting purposes the LOGGER Action can be used to log all events that match a specific rule. The Logger Executor behind this Action type logs received Actions: it simply outputs the whole Action body to the standard log at the info level.


Archive

The ARCHIVE Action type allows you to write the Events from the received Tornado Actions to a file with the help of a dedicated Archive Executor.

Requirements and Limitations

The archive Executor can only write to locally mounted file systems. In addition, it needs read and write permissions on the folders and files specified in its configuration.


Configuration

The archive Executor has the following configuration options:

  • file_cache_size: The number of file descriptors to be cached. Caching keeps files from being continuously opened and closed at each write, which improves overall performance.

  • file_cache_ttl_secs: The Time To Live of a file descriptor. When this time reaches 0, the descriptor will be removed from the cache.

  • base_path: A directory on the file system where all logs are written. Based on their type, rule Actions received from the Matcher can be logged in subdirectories of the base_path. However, the archive Executor will only allow files to be written inside this folder.

  • default_path: A default path where all Actions that do not specify an archive_type in the payload are logged.

  • paths: A set of mappings from an archive_type to an archive_path, which is a subpath relative to the base_path. The archive_path can contain variables, specified by the syntax ${parameter_name}, which are replaced at runtime by the values in the Action’s payload.

The archive path serves to decouple the type from the actual subpath, allowing you to write Action rules without worrying about having to modify them if you later change the directory structure or destination paths.

As an example of how an archive_path is computed, suppose we have the following configuration:

base_path = "/tmp"
default_path = "/default/out.log"
file_cache_size = 10
file_cache_ttl_secs = 1

[paths]
"type_one" = "/dir_one/file.log"
"type_two" = "/dir_two/${hostname}/file.log"

and these three Rule’s actions:

  1. action_one: "archive_type": "type_one", "event": "__the_incoming_event__"

  2. action_two: "archive_type": "type_two", "event": "__the_incoming_event__"

  3. action_three: "archive_type": "", "event": "__the_incoming_event__"


Then, assuming the value of ${hostname} in the incoming Event payload is net-test:

  • action_one will be archived in /tmp/dir_one/file.log

  • action_two will be archived in /tmp/dir_two/net-test/file.log

  • action_three will be archived in /tmp/default/out.log
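The path resolution behind these three results can be sketched as follows. This is an illustrative Python reconstruction under the configuration above; resolve_archive_path is a hypothetical helper name, not part of the Executor's actual API:

```python
import re

BASE_PATH = "/tmp"
DEFAULT_PATH = "/default/out.log"
PATHS = {
    "type_one": "/dir_one/file.log",
    "type_two": "/dir_two/${hostname}/file.log",
}

def resolve_archive_path(payload):
    # No archive_type (or an empty one): fall back to the default path
    archive_type = payload.get("archive_type")
    if not archive_type:
        return BASE_PATH + DEFAULT_PATH
    # Replace each ${parameter} placeholder with the payload value;
    # an unknown type or missing parameter is an error in the real Executor
    template = PATHS[archive_type]
    return BASE_PATH + re.sub(
        r"\$\{(\w+)\}", lambda m: str(payload[m.group(1)]), template
    )

print(resolve_archive_path({"archive_type": "type_one"}))
print(resolve_archive_path({"archive_type": "type_two", "hostname": "net-test"}))
print(resolve_archive_path({"archive_type": ""}))
```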

The archive Executor expects an Action to include the following elements in the payload:

  1. An event: The Event to be archived should be included in the payload under the key event

  2. An archive type (optional): The archive type is specified in the payload under the key archive_type

When an archive_type is not specified, the default_path is used (as in action_three). Otherwise, the Executor will use the archive_path in the paths configuration corresponding to the archive_type key (action_one and action_two).

When an archive_type is specified but there is no corresponding key in the mappings under the paths configuration, or it is not possible to resolve all path parameters, then the Event will not be archived. Instead, the archiver will return an error.

The Event from the payload is written into the log file in JSON format, one event per line.


Elasticsearch

The ELASTICSEARCH Action type allows you to extract data from a Tornado Action and send it to Elasticsearch.

The Elasticsearch Executor behind this Action type expects a Tornado Action to include the following elements in its payload:

  1. An endpoint: The Elasticsearch endpoint which Tornado will call to create the Elasticsearch document (e.g. http://localhost:9200),

  2. An index: The name of the Elasticsearch index in which the document will be created (e.g. tornado-example),

  3. A data: The content of the document that will be sent to Elasticsearch

       "user" : "kimchy",
       "post_date" : "2009-11-15T14:12:12",
       "message" : "trying out Elasticsearch"
  4. An auth (optional): a method of authentication, see below

The Elasticsearch Executor will create a new document in the specified Elasticsearch index for each action executed; also the specified index will be created if it does not already exist.

In the above JSON document, no authentication is specified, therefore the default authentication method defined at Executor creation is used. This method is saved in a Tornado configuration file (elasticsearch_executor.toml) and can be overridden for each Tornado Action, as described in the next section.

Elasticsearch authentication

When the Elasticsearch Action is created, a default authentication method can be specified; it will be used to authenticate to Elasticsearch unless the action specifies a different one. Conversely, if no default method is defined at creation time, then each action that does not specify an authentication method will fail.

To use a specific authentication method, the action should include the auth field with one of the following authentication types: None or PemCertificatePath, as shown in the following examples.

  • None: the client connects to Elasticsearch without authentication


       "type": "None"
  • PemCertificatePath: the client connects to Elasticsearch using the PEM certificates read from the local file system. When this method is used, the following information must be provided:

    • certificate_path: path to the public certificate accepted by Elasticsearch

    • private_key_path: path to the corresponding private key

    • ca_certificate_path: path to CA certificate needed to verify the identity of the Elasticsearch server


       "type": "PemCertificatePath",
       "certificate_path": "/path/to/tornado/conf/certs/tornado.crt.pem",
       "private_key_path": "/path/to/tornado/conf/certs/private/tornado.key.pem",
       "ca_certificate_path": "/path/to/tornado/conf/certs/root-ca.crt"


Script

The SCRIPT Action type allows you to run custom shell scripts on a Unix-like system in order to customize the Action according to your needs.

In order to be correctly processed by the Script Executor, an Action should provide two entries in its payload: the path to a script on the local filesystem of the Executor process, and all the arguments to be passed to the script itself.

The script path is identified by the payload key script, for example a script located in:

./usr/share/scripts/

It is important to verify that the Executor has both read and execute rights at that path.

The script arguments are identified by the payload key args; if present, they are passed as command line arguments when the script is executed.

Foreach Tool

The Foreach Tool loops through a set of data and executes a list of actions for each entry; it extracts all values from an array of elements and injects each value into a list of actions under the item key.

There are two mandatory configuration entries in its payload:

  • target: the array of elements, e.g. ${event.payload.list_of_objects}

  • actions: the array of actions to execute

In order to access the item of the current cycle in the actions inside the Foreach Tool, use the variable ${item}.
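The Foreach semantics can be sketched as follows, assuming string-valued ${item} placeholders in the action payloads (the real Tool also injects structured values); run_foreach is a hypothetical helper, not Tornado's actual implementation:

```python
import copy

def run_foreach(target, actions):
    # Expand the action list once per element of the target array,
    # replacing ${item} placeholders with the current element
    expanded = []
    for item in target:
        for action in actions:
            resolved = copy.deepcopy(action)
            payload = resolved.get("payload", {})
            for key, value in payload.items():
                if isinstance(value, str):
                    payload[key] = value.replace("${item}", str(item))
            expanded.append(resolved)
    return expanded

actions = [{"id": "logger", "payload": {"message": "processing ${item}"}}]
result = run_foreach(["host-a", "host-b"], actions)
# result contains two logger actions, one per element of the target array
```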

Common Logger

The tornado_common_logger crate contains the logger configuration for the Tornado components.

The configuration is based on three entries:

  • level: A comma-separated list of logger verbosity levels. Valid values for a level are: trace, debug, info, warn, and error. If only one level is provided, it is used as the global logger level. Otherwise, a list of per-package levels can be used. E.g.:

    • level=info: the global logger level is set to info

    • level=warn,tornado=debug: the global logger level is set to warn, the tornado package logger level is set to debug

  • stdout-output: A boolean value that determines whether the Logger should print to standard output. Valid values are true and false.

  • file-output-path: An optional string that defines a file path in the file system. If provided, the Logger will append any output to that file.
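The parsing rules for the level entry can be sketched as follows; parse_levels is an illustrative helper, not Tornado's actual parser:

```python
def parse_levels(level):
    # A bare level sets the global verbosity; "package=level" entries
    # override the verbosity for that package only
    global_level, overrides = "warn", {}
    for part in level.split(","):
        if "=" in part:
            package, lvl = part.split("=", 1)
            overrides[package] = lvl
        else:
            global_level = part
    return global_level, overrides

print(parse_levels("info"))                # ('info', {})
print(parse_levels("warn,tornado=debug"))  # ('warn', {'tornado': 'debug'})
```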

The configuration subsection logger.tracing_elastic_apm allows you to configure the connection to Elastic APM for the tracing functionality. The following entries can be configured:

  • apm_output: Whether the Logger data should be sent to the Elastic APM Server. Valid values are true and false.

  • apm_server_url: The URL of the Elastic APM Server.

  • apm_server_api_credentials.id: (Optional) the ID of the API Key for authenticating to the Elastic APM server.

  • apm_server_api_credentials.key: (Optional) the key of the API Key for authenticating to the Elastic APM server. If apm_server_api_credentials.id and apm_server_api_credentials.key are not provided, they will be read from the file <config_dir>/apm_server_api_credentials.json

  • exporter.max_queue_size: (Optional) The maximum queue size of the tracing batch exporter to buffer spans for delayed processing. Defaults to 65536.

  • exporter.scheduled_delay_ms: (Optional) The delay interval in milliseconds between two consecutive exports of batches. Defaults to 5000 (5 seconds).

  • exporter.max_export_batch_size: (Optional) The maximum number of spans to export in a single batch. Defaults to 512.

  • exporter.max_export_timeout_ms: (Optional) The maximum time (in milliseconds) for which an export can run before it is cancelled. Defaults to 30000 (30 seconds).

In Tornado executables, the Logger configuration is usually defined with command line parameters managed by structopt. In that case, the default level is set to warn, stdout-output is disabled and the file-output-path is empty.

For example:

./tornado --level=info --stdout-output --file-output-path=/tornado/log

Advanced Configuration

Below is a list of configuration cases which, on top of the basic Tornado Configuration, allow you to customize your experience of using Tornado within your NetEye installation.

Thread Pool Configuration

Although the default configuration should suit most use cases, in some particular situations it can be useful to customize the size of the internal queues used by Tornado. Tornado utilizes these queues to process incoming events and to dispatch triggered actions.

Tornado uses a dedicated thread pool per queue; the size of each queue is by default equal to the number of available logical CPUs. Consequently, in case of an action of type script, for example, Tornado will be able to run in parallel at most as many scripts as the number of CPUs.

This default behaviour can be overridden by providing a custom configuration for the thread pool sizes. This is achieved through the optional thread_pool_config entry in the tornado.daemon section of the Tornado.toml configuration file.

Example of Thread Pool’s Dynamic Configuration

thread_pool_config = {type = "CPU", factor = 1.0}

In this case, the size of the thread pool will be equal to the number of available logical CPUs multiplied by factor, rounded up to the nearest integer. If the resulting value is less than 1, then 1 will be used by default.

For example, if there are 16 available CPUs, then:

  • {type: "CPU", factor: 0.5} => thread pool size is 8

  • {type: "CPU", factor: 2.0} => thread pool size is 32

Example of Thread Pool’s Static Configuration

thread_pool_config = {type = "Fixed", size = 20}

In this case, the size of the thread pool is statically fixed at 20. If the provided size is less than 1, then 1 will be used by default.
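Both sizing rules can be summarized in a small sketch; thread_pool_size is an illustrative helper, not part of Tornado's API:

```python
import math

def thread_pool_size(config, available_cpus):
    # "Fixed": use the given size; "CPU": ceil(cpus * factor).
    # In both cases the result is never smaller than 1.
    if config["type"] == "Fixed":
        size = config["size"]
    else:
        size = math.ceil(available_cpus * config["factor"])
    return max(1, size)

print(thread_pool_size({"type": "CPU", "factor": 0.5}, 16))   # 8
print(thread_pool_size({"type": "CPU", "factor": 2.0}, 16))   # 32
print(thread_pool_size({"type": "Fixed", "size": 20}, 16))    # 20
```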

Retry Strategy Configuration

Tornado allows the configuration of a global retry strategy to be applied when the execution of an Action fails.

A retry strategy is composed of:

  • retry policy: the policy that defines whether an action execution should be retried after an execution failure;

  • backoff policy: the policy that defines the sleep time between retries.

Valid values for the retry policy are:

  • {type = "MaxRetries", retries = 5} => A predefined maximum number of retry attempts. This is the default policy, with retries set to 20.

  • {type = "None"} => No retries are performed.

  • {type = "Infinite"} => The operation will be retried an infinite number of times. This setting must be used with extreme caution as it could fill the entire memory buffer preventing Tornado from processing incoming events.

Valid values for the backoff policy are:

  • {type = "Exponential", ms = 1000, multiplier = 2 }: It increases the backoff period for each retry attempt using an exponential function. The first backoff period is ms; the multiplier is used to calculate each subsequent backoff interval from the previous one. This is the default configuration.

  • {type = "None"}: No sleep time between retries.

  • {type = "Fixed", ms = 1000 }: A fixed amount of milliseconds to sleep between each retry attempt.

  • {type = "Variable", ms = [1000, 5000, 10000]}: The amount of milliseconds between two consecutive retry attempts.

    The time to wait after ‘i’ retries is specified in the vector at position ‘i’.

    If the number of retries is bigger than the vector length, then the last value in the vector is used. For example:

    ms = [111,222,333] -> It waits 111 ms after the first failure, 222 ms after the second failure and then 333 ms for all following failures.
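The backoff policies above can be illustrated with a small sketch that computes the sleep time before a given retry attempt; backoff_ms is a hypothetical helper, not Tornado's actual implementation:

```python
def backoff_ms(policy, attempt):
    # Sleep time in milliseconds before retry number `attempt` (1-based)
    if policy["type"] == "None":
        return 0
    if policy["type"] == "Fixed":
        return policy["ms"]
    if policy["type"] == "Variable":
        # Past the end of the vector, the last value is reused
        ms = policy["ms"]
        return ms[min(attempt - 1, len(ms) - 1)]
    # "Exponential": first sleep is `ms`, each next one multiplied by `multiplier`
    return policy["ms"] * policy["multiplier"] ** (attempt - 1)

variable = {"type": "Variable", "ms": [111, 222, 333]}
print([backoff_ms(variable, i) for i in range(1, 5)])  # [111, 222, 333, 333]

exponential = {"type": "Exponential", "ms": 1000, "multiplier": 2}
print([backoff_ms(exponential, i) for i in range(1, 4)])  # [1000, 2000, 4000]
```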

Example of a complete Retry Strategy configuration

retry_strategy.retry_policy = {type = "Infinite"}
retry_strategy.backoff_policy = {type = "Variable", ms = [1000, 5000, 10000]}

When not provided explicitly, the following default Retry Strategy is used:

retry_strategy.retry_policy = {type = "MaxRetries", retries = 20}
retry_strategy.backoff_policy = {type = "Exponential", ms = 1000, multiplier = 2 }