User Guide


This is the most important section of the user guide, because it describes how to configure the various services running on NetEye and the modules installed.



In Icinga2, configuration changes are stored in working memory until they are deployed, i.e., the Icinga2 configuration is generated and distributed to all agents in the zone. Initiating a deploy action will check configuration validity, recording the details of the outcome. There are two principal elements, both reachable from the main Director menu:

  • Deployment: Manage the actions around deploying a new configuration.

  • Activity Log: Look at the deployment history, and carry out actions on that history.

Deploying a Configuration

To deploy a modified configuration, go to Director > Deployments. There you will see the Deployments tab (Fig. 18) that shows all recent successful (green check) and failed (red ‘X’) deployment attempts along with the date and time of those actions. Clicking on the link will take you to the “Deployment details” panel that additionally contains any warnings that may have been issued, and a ‘Configuration’ link that will show you which configuration files were modified when the deployment occurred.

The deployments outcome panel

Fig. 18 The deployments outcome panel

Now click on the Render config action, which displays the “Generated config” panel, a summary of the newly generated configuration (Fig. 19). This panel contains a list of the files involved in the configuration changes, including a summary of the number of objects per file by type (objects, templates, or apply rules), and the new size of that file.

Deployment outcome panel

Fig. 19 Deployment outcome panel

You will now see three new actions:

  • Deploy pending changes: Implement the deployment, which distributes the configuration to all Icinga2 agents. You can distribute the deployment even if there are no changes to make.

  • Last related activity: Show differences between the current configuration and the most recent configuration before the current one.

  • Diff with other config: Compare any two configurations using their unique identifiers (the numbers in parentheses in Fig. 18). The current configuration is inserted by default.

Activity Log

The Activity Log Panel (Fig. 20) lets you look at the history of successful deployments, and carry out actions on items in that history.

The My changes action lets you switch from showing the history of all changes to the view showing only those changes you made. You can then click on All changes to return to viewing changes made by all users.

The activity log panel

Fig. 20 The activity log panel

Each row represents a successful change to an object, coded for action type, user (“cli” indicates an automated action) and time. The action types are:

  • Create (a blue “+” icon)

  • Modify (a green “wrench” icon)

  • Delete (a red “x” icon)

A duplicate timestamp over consecutive rows indicates those objects were deployed at the same time. Clicking on the modify action type in particular will take you to the Diff panel (Fig. 21) that will detail exactly what changes were made.

Showing the differences between before and after configurations

Fig. 21 Showing the differences between before and after configurations

Once you have completed a successful deployment of monitoring objects in Director, you can then go to the host monitoring panel (e.g., click on a host under Icinga Director ‣ Hosts ‣ Hosts) to check on the success of the overall monitoring configuration.

Satellite Nodes

When monitoring large numbers of servers and devices, especially in multiple remote locations, it makes sense to offload the work of the main NetEye monitoring node (the “master”) onto a group of helper nodes (“satellites”). Icinga 2 lets you secure communication between these high-level nodes by issuing certificates based on each satellite’s host name.

In this How To you will learn the procedure for adding a satellite node to your existing monitoring environment, and for renaming an existing satellite node.

The example below assumes you already have a working NetEye installation with an existing master node named “neteye-master”. We also assume that you have just installed and configured a new satellite node named “” by following the instructions at the Initial Configuration page.

Step #1: Prepare the Satellite Node

On the node that will serve as the satellite node, run the node wizard command in a shell:

[satellite]# icinga2 node wizard

You will need to fill in the following details when requested by the node wizard script:

  • Satellite/client setup? Y

  • Common name:* (FQDN)

  • Parent endpoint: neteye-master (hostname only)

  • Parent connection: Y

  • Master/satellite endpoint host: (IP address of neteye-master)

  • Master/satellite endpoint port: 5665

  • Add more? N

  • Certificate information correct? Y

  • Request ticket generated on master: Use the output of this command run on the master node:

    [master]# icinga2-master pki ticket --cn ''

  • Accept config from parent node? Y

  • Accept commands from parent node? Y

And then accept the remaining proposed defaults. If successful, you will see the message “Done.” in green.

Next, restart the satellite’s icinga2 service:

[satellite]# systemctl restart icinga2
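For reference, the node wizard writes Endpoint and Zone definitions into the satellite’s zones.conf. The result typically looks roughly like the following sketch (the hostnames, IP address, and zone name here are purely illustrative):

```
object Endpoint "neteye-master" {
        host = "192.0.2.10"
        port = "5665"
}

object Zone "master" {
        endpoints = [ "neteye-master" ]
}

object Endpoint "satellite.example.com" {
}

object Zone "satellite.example.com" {
        endpoints = [ "satellite.example.com" ]
        parent = "master"
}
```

If the restart fails, checking this file for mismatched endpoint or zone names is a good first step.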

Renaming a Satellite Node

The satellite constructed above is named “”. Suppose, though, that you are reorganizing your satellite names: you want to add a new satellite, and the names should reflect the differences between the satellite zones. This is not a simple procedure, because the hostnames of the endpoints are used to create the certificates necessary for secure communication. Whenever the certificate is checked (e.g., whenever a connection is established between master and satellite), the check will fail if the hostnames do not match.

Step #2: Regenerate the Certificates

Now we can regenerate the satellite’s certificate so that it will match the new name and sign it with the master’s certificate.

On the master, run the following command (be sure to use the new name) to create the ticket needed by the satellite:

[master]# icinga2-master pki ticket --cn 'satellite-europe-1'

Then on the satellite, generate a new certificate and retrieve the master certificate (you may need to use the master’s IP address instead of its common name):

[satellite]# cd /neteye/local/icinga2/data/lib/icinga2/certs/
[satellite]# icinga2 pki new-cert --cn satellite-europe-1 --key satellite-europe-1.key --cert satellite-europe-1.crt
[satellite]# icinga2 pki save-cert --host --trustedcert neteye-master.crt

Next, temporarily change the permissions and sign the certificate (still on the satellite, but using the newly generated ticket with the new name):

[satellite]# chown icinga:icinga ca.crt
[satellite]# icinga2 pki request --ticket $TICKET --host --key satellite-europe-1.key --cert satellite-europe-1.crt --trustedcert neteye-master.crt --ca ca.crt
[satellite]# chown root: ca.crt
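The $TICKET variable used in the request above must hold the ticket generated on the master in the previous step. A minimal sketch for wiring it up (the SSH invocation, hostname, and the placeholder ticket value are assumptions about your environment):

```shell
# Option 1: paste the ticket printed on the master by hand
# (the value below is a hypothetical placeholder):
TICKET="a1b2c3d4..."

# Option 2: fetch it over SSH (assumes root SSH access to the master):
# TICKET=$(ssh root@neteye-master "icinga2-master pki ticket --cn 'satellite-europe-1'")

# Sanity check: the ticket must be non-empty before running 'icinga2 pki request'
[ -n "$TICKET" ] && echo "ticket set"
```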

Step #3: Restart the Icinga Services

Returning to the master, restart the master icinga2 service:

[master]# systemctl restart icinga2-master

And then again to the satellite, restart the icinga2 service:

[satellite]# systemctl restart icinga2

Step #4: Run the Kickstart Wizard

Finally, on the master’s web interface import the new configuration by clicking on “Run Import” at Icinga Director > Icinga Infrastructure > Kickstart Wizard. Unlike creating a new satellite, you do not need to redeploy within Icinga Director when you rename it. At this point, your new configuration should be working properly with the new satellite name.

As before, you can now verify that the configuration was successful by checking that both the new master and satellite endpoints (“neteye-master” and “satellite-europe-1” in this example) are listed here: Icinga Director > Icinga Infrastructure > Endpoints. The endpoint name on the satellite will not change.

Agent Nodes

Icinga2 packages

The NetEye repositories provide Icinga2 packages for agent installation on different operating systems and distributions. Specifically, we support:

Debian derivatives:
  • Debian Buster

  • Debian Jessie

  • Debian Stretch

  • Ubuntu Xenial

  • Ubuntu Bionic

  • Ubuntu Eoan

  • Ubuntu Focal

  • Ubuntu Groovy

Red Hat derivatives:
  • CentOS 6

  • CentOS 7

  • CentOS 8

  • Fedora 29

  • Fedora 30

  • Fedora 31

  • Fedora 32

SUSE derivatives:
  • OpenSuse 15.0

  • OpenSuse 15.1

  • SLES 12.4

  • SLES 12.5

  • SLES 15.0

  • SLES 15.1

  • SLES 15.2

and Windows.

Note: To install Icinga2 packages you need the Boost libraries (version 1.66.0 or newer) installed or available via the default package manager.

Icinga2 repository versioning

You must use the Icinga2 packages provided by the NetEye repositories instead of the official Icinga2 packages. From NetEye 4.16 onwards, Icinga2 agent packages are specific both to the NetEye version and to the monitored operating system version. Adjust package URLs accordingly: if you are downloading packages for NetEye 4.<neteye_minor>, replace neteye-x.x-icinga2-agents with neteye-4.<neteye_minor>-icinga2-agents in the package URLs below.

Add the NetEye repository for Icinga2 packages

This section will explain how to add the dedicated NetEye repository for Icinga2 packages in different OSs and distributions (e.g. Ubuntu, CentOS, SUSE), thus supporting the installation of an Icinga2 agent via the default package manager installed in the OS.

Repository URLs follow this syntax: <distribution>-<codename_or_version>/neteye-4.<neteye_minor>-icinga2-agents/
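As a sketch, the distribution-specific part of the URL can be assembled like this (the values are examples; only the path fragment after the repository host is shown):

```shell
distribution="centos"   # example distribution
version="7"             # example distribution version
neteye_minor="16"       # example NetEye minor version

# Assemble the path fragment following the syntax above
repo_path="${distribution}-${version}/neteye-4.${neteye_minor}-icinga2-agents/"
echo "$repo_path"       # centos-7/neteye-4.16-icinga2-agents/
```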

Icinga2 RPM repository

To add the repository that provides the Icinga2 RPM packages (e.g. CentOS, SUSE, Fedora) you have to add a new repository definition to your system.

Let us suppose that you need to add the new repository definition on a CentOS 7 machine, which is monitored via NetEye 4.16. You can add the repo definition in a file neteye-icinga2-agent.repo:

name=NetEye Icinga2 Agent Packages

Please note that the location of this file changes according to the distribution used. For example, on Fedora and CentOS installations the default repo definition directory is /etc/yum.repos.d/, while SUSE uses /etc/zypp/repos.d/.

Once the new repository has been added, you need to load the new repository data by running yum update.
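Putting this together, a minimal repo definition might look like the following sketch (written to a temporary path here for illustration; the baseurl host is a placeholder you must replace with your NetEye repository URL, and gpgcheck=0 is an assumption):

```shell
# Sketch of a minimal .repo definition for a CentOS 7 machine on NetEye 4.16.
# <repository_base_url> is a placeholder, not a real URL.
cat > /tmp/neteye-icinga2-agent.repo <<'EOF'
[neteye-icinga2-agent]
name=NetEye Icinga2 Agent Packages
baseurl=<repository_base_url>/centos-7/neteye-4.16-icinga2-agents/
enabled=1
gpgcheck=0
EOF

# On a real CentOS/Fedora system, place the file in /etc/yum.repos.d/ instead
grep '^name=' /tmp/neteye-icinga2-agent.repo
```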

Icinga2 DEB repository

To add the Icinga2 agent repository on Ubuntu or Debian systems you have to create the file neteye-icinga2-agent.list in the directory /etc/apt/sources.list.d/.

For example, to add the repository on an Ubuntu 20.04 Focal Fossa you have to create a file with the following content:

"deb [trusted=yes] stable main"

Finally, run apt update to update the repo data.
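A corresponding sketch for Ubuntu 20.04 (written to a temporary path here for illustration; the repository host is again a placeholder, while the suite and component follow the line shown above):

```shell
# Sketch of the one-line deb entry; <repository_base_url> is a placeholder.
cat > /tmp/neteye-icinga2-agent.list <<'EOF'
deb [trusted=yes] <repository_base_url>/ubuntu-focal/neteye-4.16-icinga2-agents/ stable main
EOF

# On a real system, place the file in /etc/apt/sources.list.d/ and run: apt update
grep -c '^deb ' /tmp/neteye-icinga2-agent.list
```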

Icinga2 windows packages

Get the Icinga2 Agent for Windows by accessing the URL below and downloading the .msi file:
Install Icinga2

To install Icinga2, follow the Icinga2 Documentation. Icinga2 requires the Boost libraries to work properly; ensure that they are also installed on the system.

To install the Windows .msi package on an agent, follow the official Icinga2 Windows Agent Installation document.

Working with Icinga 2 Agents can be quite tricky, as each Agent needs its own Endpoint and Zone definition, correct parent, peering host and log settings. There may always be reasons for a completely custom-made configuration. However, we strongly suggest using the Director-assisted variant. It will save you a lot of headaches.


Agent settings are not available for modification directly on a host object. This requires you to create an “Icinga Agent” template. You could name it exactly like that; it’s important to use meaningful names for your templates.

Create an Agent template

Fig. 22 Create an Agent template

As long as you’re not using Satellite nodes, a single Agent zone is all you need. Otherwise, you should create one Agent template per satellite zone. If you want to move an Agent to a specific zone, just assign it the correct template and you’re all done.


Well, create a host, choose an Agent template, that’s it:

Create an Agent-based host

Fig. 23 Create an Agent-based host

Once you import the “Icinga Agent” template, you’ll see a new “Agent” tab. It tries to assist you with the initial Agent setup by showing a sample config:

Agent instructions 1

Fig. 24 Agent instructions 1

The preview shows that the Icinga Director would deploy multiple objects for your newly created host:

Agent preview

Fig. 25 Agent preview
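The generated objects typically look along these lines (the hostname, address, and zone names here are purely illustrative):

```
object Endpoint "agent01.example.com" {
    host = "192.0.2.21"
}

object Zone "agent01.example.com" {
    endpoints = [ "agent01.example.com" ]
    parent = "master"
}

object Host "agent01.example.com" {
    import "Icinga Agent"
    address = "192.0.2.21"
}
```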

Create Agent-based services

The procedure is similar for services that should run on your Agents. First, create a template with a meaningful name. Then specify that Services inheriting from this template should run on your Agents.

Agent-based service

Fig. 26 Agent-based service

Please do not set a cluster zone; this is rarely necessary. Agent-based services will always be deployed to their Agent’s zone by default. All you need to do now for services that should be executed on your Agents is to import that template:

Agent-based load check

Fig. 27 Agent-based load check

Config preview shows that everything works as expected:

Agent-based service preview

Fig. 28 Agent-based service preview

Monitored Objects

Managing Fields

This example shows how to use the Array data type when creating fields for custom variables. First, go to the Dashboard and choose the Define data fields dashlet:

Dashboard - Define data fields

Fig. 29 Dashboard - Define data fields

Then create a new data field and select Array as its data type:

Define data field - Array

Fig. 30 Define data field - Array

Then create a new Host template (or use an existing one):

Define host template

Fig. 31 Define host template

Now add your formerly created data field to your template:

Add field to template

Fig. 32 Add field to template

That’s it, now you are ready to create your first corresponding host. Once you add your formerly created template, a new form field for your custom variable will show up:

Create host with given field

Fig. 33 Create host with given field

Have a look at the config preview; it shows what your Array-based custom variable will look like once deployed:

Host config preview with Array

Fig. 34 Host config preview with Array
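The rendered output for an Array-based custom variable looks roughly like this sketch (host, template, and variable names are hypothetical):

```
object Host "myhost.example.com" {
    import "my_host_template"

    address = "192.0.2.30"
    vars.my_array_var = [ "value1", "value2", "value3" ]
}
```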

Data Fields example: SNMP

Ever wondered how to provide an easy-to-use SNMP configuration to your users? That’s what we’re going to show in this example. Once completed, all your Hosts inheriting a specific (or your “default”) Host Template will provide an optional SNMP version field.

In case you choose no version, nothing special will happen. Otherwise, the host offers additional fields depending on the chosen version. Community String for SNMPv1 and SNMPv2c, and five other fields ranging from Authentication User to Auth and Priv types and keys for SNMPv3.

Your services should now be applied not only based on various Host properties like Device Type, Application, Customer or similar, but also based on whether credentials have been given.

Prepare required Data Fields

As we have already learned, Fields allow us to define which custom variables can or should be set, and according to which rules. We want SNMP version to be a drop-down, and that’s why we first define a Data List, followed by a Data Field using that list:

Create a new Data List
Create a new Data List

Fig. 35 Create a new Data List

Fill the new list with SNMP versions
Fill the new list with SNMP versions

Fig. 36 Fill the new list with SNMP versions

Create a corresponding Data Field
Create a Data Field for SNMP Versions

Fig. 37 Create a Data Field for SNMP Versions

Next, please also create the following elements:

  • a list SNMPv3 Auth Types providing MD5 and SHA

  • a list SNMPv3 Priv Types providing at least AES and DES

  • a String type field snmp_community labelled SNMP Community

  • a String type field snmpv3_user labelled SNMPv3 User

  • a String type field snmpv3_auth labelled SNMPv3 Auth (authentication key)

  • a String type field snmpv3_priv labelled SNMPv3 Priv (encryption key)

  • a Data List type field snmpv3_authprot labelled SNMPv3 Auth Type

  • a Data List type field snmpv3_privprot labelled SNMPv3 Priv Type

Please do not forget to add meaningful descriptions, telling your users about in-house best practices.

Assign your shiny new Fields to a Template

I’m using my default Host Template for this, but one might also choose to provide SNMP version on Network Devices. Should Network Device be a template? Or just an option in a Device Type field? You see, the possibilities are endless here.

This screenshot shows part of my assigned Fields:

SNMP Fields on Default Host

Fig. 38 SNMP Fields on Default Host

While I kept SNMP Version optional, all other fields are mandatory.

Use your Template

As soon as you choose your template, a new field is shown:

Choose SNMP version

Fig. 39 Choose SNMP version

In case you change it to SNMPv2c, a Community String will be required:

Community String for SNMPv2c

Fig. 40 Community String for SNMPv2c

Switch it to SNMPv3 to see completely different fields:

Auth and Priv properties for SNMPv3

Fig. 41 Auth and Priv properties for SNMPv3

Once stored, please check the rendered configuration. Switch the SNMP versions back and forth, and you should see that filtering out fields also removes the corresponding values from the object.
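For example, with SNMPv2c selected and a community string filled in, the rendered host object would contain custom variables along these lines (all names and values here are illustrative):

```
object Host "switch01.example.com" {
    import "default-host"

    address = "192.0.2.40"
    vars.snmp_version = "2c"
    vars.snmp_community = "public"
}
```

Switching to SNMPv3 would instead render the snmpv3_* variables and drop the community string.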

Managing Templates

Templates let child objects inherit properties from their parent objects, allowing you to add and change configurations for hundreds of monitored objects with a single click. In Icinga2, every monitored object must have at least one parent template. Thus before adding hosts and services, you must first create host templates and service templates, respectively.

Host Templates

To create a new host template, go to Icinga Director > Host objects > Host Templates. If you are starting with an empty installation, you will see a blank panel with actions labelled “back”, “Add” and “Tree”. Otherwise, you will also see the pre-installed NetEye host templates. You can click “Add” to create a new template as shown on the left side of Fig. 42. Each template will have its own row in the “Template Name” table, and you can use the “Tree” action to switch between Table (flat) and Tree (inheritance) views of the templates.

In an empty installation, the first template will automatically be the top-level host template. Otherwise you will see a “+” icon to the right of the template’s row that will allow you to create a monitored host that is an instance of that host template. The circular arrow icon instead will show you the history panel for that template and let you initiate deployment if desired.

Once you click “Add”, the host template panel will appear to the right. This panel can be used both to add a new template as well as modify an existing template. In the latter case, the green button at the bottom of the panel will change from “Add” to “Store”. At the top is a “Deploy” action if you prefer to immediately write out the template configuration, and a “Clone” action to create a copy of an already-created template.

To create a new host template, fill in the following fields (in general you can click on the title of a field to see instructions in a dialog at the bottom of the panel):

  • Main Properties:

    • Name: The name of the template which will appear in the template panel and can be referenced when defining hosts.

    • Imports: The parent template of the current template if it is not the root.

    • Groups: Here you can add a host group if you have defined one.

    • Check command: The default check command for the host as a whole (typically “hostalive” or “ping”)

  • Custom properties: If you have defined custom fields in the “Fields” tab at the top, they will appear here, otherwise this section will not be visible.

  • Check execution: Here you can define the parameters for when a command check is run.

  • Additional properties: Allows you to set URLs and an icon which will appear in the monitoring view.

  • Agent and zone settings: Allows you to choose whether the host uses agent-based monitoring or not. For more information, see the Active Monitoring section, or the official documentation, which describes distributed monitoring setups in great detail.

Finally, click on “Add” or “Store” to store the template in working memory. You must deploy the template to effect any changes in your monitoring environment.

Creating a host template

Fig. 42 Creating a host template

Host templates support inheritance, so we can arrange host templates in an hierarchy. The host templates panel at the left of Fig. 43 shows the Tree view of all host templates. In the example in the template panel to the right, the template for “generic_host_agent”, you can see that the “Imports” field has been set to “generic_host”, and in fact in the host templates panel generic_host_agent can be found under generic_host in the hierarchy.

Because this template inherits from another, the default values for checks are inherited from the values we filled in for the generic_host template in Fig. 42. In addition, the source of inheritance is mentioned for each field. If we were to override some of the values in this template, and then create a third host template under this one, then the fields in the third host template would reflect whether they were inherited from generic_host or generic_host_agent.
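In rendered form, this inheritance chain corresponds to nested imports, roughly like the following sketch (the field values are illustrative):

```
template Host "generic_host" {
    check_command = "hostalive"
    max_check_attempts = "3"
}

template Host "generic_host_agent" {
    import "generic_host"

    // Only overridden fields appear here; everything else is inherited
    check_interval = 1m
}
```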

There is, however, a subtle case involving host groups in which inheritance does not work as expected. We clarify this with an example.

Suppose an ACME_host_template host template exists, which defines the group ACME-all-hosts. We now define a template ACME-host-Bolzano-template which inherits from ACME_host_template (and therefore also from the group ACME-all-hosts). Finally, we also create a Role which has a director/filter/hostgroups restriction on the ACME-all-hosts host group.

At this point, contrary to expectation, when users with non-administrative access who possess this Role add a new host using the ACME-host-Bolzano-template template, they will not see that the hosts they create are members of the group ACME-all-hosts.

In order to let these hosts appear as part of the ACME-all-hosts host group, you need to assign the host group ACME-all-hosts directly to the ACME-host-Bolzano-template host template.

Tree view of an intermediate host template

Fig. 43 Tree view of an intermediate host template

Beyond the Host tab, there are five additional tabs which are described briefly here:

  • Services: Add a service template or service set that will be inherited by (1) every host template under this template and (2) every host defined by this template.

  • Preview: View the configuration as it will be written out for Icinga2.

  • History: See a list of past deployments of this template. By clicking on “modify”, you can see what changed from the previous version, see the complete previous version, and you can even restore the previous version.

  • Fields: Create custom fields (and variables for their values) that can be filled in on the host template under the section “Custom properties”. The custom field panel will let you name the field, force a field’s value to be set, and determine whether to show a field based on a user-definable condition.

  • Agent: Create a self-service API key that will allow Icinga2 to integrate Icinga agents installed on the monitored hosts.

Command Templates

To create a new command template, go to Icinga Director > Commands > Command Templates. The command templates panel (Fig. 44) is similar to that for hosts: it lists all command templates either in a table or tree view, and allows you to add a new command template.

Unlike hosts and services, you can create commands that do not inherit from a command template. However, command templates can be especially useful when you need to share a common set of parameters across many similar command definitions. Also unlike hosts and services, you can inherit one command check from another command check without ever creating a command template.

There is also a third category of commands called “External Commands”, which is a library of pre-defined commands not created in Director, and thus not modifiable (although you can add new parameters). External commands include ping, http and mysql.

Clicking on “Add” brings up the command template creation panel on the right. As with host templates, left-clicking on a field name will display a brief description at the bottom of the screen, the green “Add” or “Store” button at the bottom of the panel will change the configuration in memory, the “Deploy” action lets you immediately write out the template configuration, and the “Clone” action will create a copy of an existing template.

The following fields are found in the command template creation panel (Fig. 44):

  • Command type: Choose a particular command type. These are described in the official Icinga2 documentation.

    • Plugin Commands: Plugin check command, Notification plugin command, and Event plugin command.

    • Internal Commands: Icinga check command, Icinga cluster check command, Icinga cluster zone check command, Ido check command, Random check command, and Crl check command.

  • Command name: Here you can define the name you prefer for the command template.

  • Imports: Choose a command template to inherit from (leave this blank to create a top-level template).

  • Command: The command to be executed, along with an optional file path.

  • Timeout: An optional timeout.

  • Disabled: This allows you to disable the given check command simultaneously for all hosts implementing this command template.

  • Zone settings: Select the appropriate cluster zone. For small monitoring setups, this value should be “master”.

Adding a new command template

Fig. 44 Adding a new command template

Like the host template panel, here there are additional tabs for Preview, History, and Fields. There is also an Arguments tab (Fig. 45) to allow you to customize the arguments for a particular command. You must create a new command arguments configuration for each argument (using pre-defined command templates can thus save you a significant amount of time).

The most important fields are (click on any field title to see its description at the bottom of the panel):

  • Name: The name of the argument as it will appear in the command, e.g. --hostname.

  • Value type: Use type String for standard variables like $host$.

  • Value: A numeric value, variable, or pre-defined function.

  • Position: The position of this argument relative to other arguments, expressed as an integer.

More information can be found at the official Icinga2 documentation.

Adding arguments to a command

Fig. 45 Adding arguments to a command

Service Templates

To create a new service template, go to Icinga Director > Monitored Services > Service Templates. The services templates panel (Fig. 46) is similar to that for hosts and commands: it lists all service templates either in a table or tree view, and allows you to add a new service template. As for the other templates, clicking the “+” icon at the right of the row will display a panel to add a monitored service as an instance of the corresponding service template, and the circular arrow action will show you that service template’s history panel (activity log).

The new service template panel (Fig. 46) asks you to fill in the following fields:

  • Name: The name of this service template.

  • Imports: Specify the parent service template, if one is desired.

  • Check command: A check to run if it is not overridden by the check for a more specific template or service object, typically from the Icinga Template Library.

  • Run on agent: Specifies whether the check is active or passive.

Creating a service template

Fig. 46 Creating a service template

As Fig. 47 shows, the check commands on lower-level service templates and monitored services are typically much more detailed. Here, a custom field holds the command to be executed over an SSH connection to a host running AIX.

service templates

Fig. 47 A low-level service template.

Managing Monitoring Objects

Once you have created templates for hosts, commands and services, you can begin to create instances of monitored objects that inherit from those templates. The guide below shows how to do this manually. In addition, hosts can be imported using automated discovery methods, while services and commands can be drawn from template libraries.

Adding a Host

The New Host Panel (Fig. 48) is similar to the template panels, and is accessible from Director > Host objects > Hosts. Each row in the panel represents a single monitored host with both host name and IP address. Clicking on the host name shows the Host Configuration panel for that host to the right, and the “Add” action brings up an empty Host Configuration panel.

Adding a new host

Fig. 48 Adding a new host

Like the template panel, there are tabs for Preview, History and Agent. In addition, there is a Services tab which shows all services assigned to that host, organized by inheritance and service set. Below the tabs is a “Show” action which takes you directly to the host object’s monitoring panel (e.g., click on the host under Icinga Director ‣ Hosts ‣ Hosts).

The following fields are important:

  • Hostname: This should be the host’s fully qualified domain name.

  • Imports: The host template(s) to inherit from.

  • Display name: A more friendly name shown in monitoring panels which does not have to be a FQDN.

  • Host address: The host’s IP address.

  • Groups: A drop down menu to assign this host to a defined host group.

  • Inherited groups, Applied groups: Assigned host groups, organized by how the group was assigned.

  • Disabled: Temporarily remove a host from monitoring, without deleting its configuration.

  • Custom properties: Fields defined for host templates, with the ability to select a value of the pre-defined type.

The remaining fields should be set on one of the host’s parent templates.


You cannot create a host that does not inherit from at least one host template.

Adding a Command Check

The New Command Panel (Fig. 49, Director > Commands > Commands) displays one command per line, including the command name and the check command to be used (without arguments). While you can always click on the name of each field in this panel to see a description at the bottom of the web page, here is a quick summary:

  • Command Type: This is the same list as in the command template panel section.

  • Command Name: The reference name used to assign this check command to a service.

  • Imports: The parent command template(s). Unlike hosts and services, this is optional.

  • Command: The actual check command to use, without arguments.

  • Timeout: A timeout that will override an inherited timeout.

  • Disabled: Disabled commands cannot be assigned.

Adding a new command

Fig. 49 Adding a new command

As with the command template panel, there is an Arguments tab (Fig. 50) that allows you to create parameter lists for the check commands either from scratch, or by overriding defaults from an inherited command template. You must create a separate entry for each parameter, which will then appear in the table below. To edit an existing parameter, simply click anywhere on its row in that list.

Adding command arguments

Fig. 50 Adding command arguments

For example, if you need to pass a parameter like “-C” with a given value when executing the check in a shell, you must add it as an argument. All such arguments need to be listed in the Arguments table. For an argument’s “Value” parameter, you can enter either a system variable or a custom variable, both of which are indicated by a ‘$’ before and after the variable name. This allows you to parameterize arguments across multiple host or service templates, including with any “Custom properties” fields you have created for those templates. In this way you can parameterize, for instance, the following very common values at the Service level, and later change them all simultaneously if desired:

  • Credentials such as SNMP community strings, or usernames and passwords for SQL login

  • Common warning and critical thresholds

  • Addresses and port numbers

  • Units

For further details about command arguments, please see the official documentation.
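As a hedged illustration (the command and variable names below are hypothetical, not taken from this guide), arguments defined in this panel end up in the rendered Icinga 2 configuration roughly like this:

```
object CheckCommand "my-snmp-check" {
  command = [ PluginDir + "/check_snmp" ]
  arguments = {
    // each value is resolved at check time from a system or custom variable
    "-C" = "$snmp_community$"
    "-w" = "$snmp_warning$"
    "-c" = "$snmp_critical$"
  }
}
```

Setting, for example, vars.snmp_community on a host or service template then fills in the “-C” argument for every check importing that command.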

Adding a Service

The Service Panel (Director > Monitored Services > Single Services) lists individual services that can be assigned to monitored hosts.

The list of services in the Service Panel

Fig. 51 The list of services in the Service Panel

Click on the “Add” action to display the New Service Panel (Fig. 52), where you can create a new service by setting the following fields:

  • Name: Give the service a unique name.

  • Imports: The parent service template(s).

  • Host: The name of at least one host or host template to which this service should be applied.

  • Groups: The name of one or more service groups to which this service should belong.

  • Disabled: Whether or not this service can be assigned.

Adding a new service

Fig. 52 Adding a new service


You cannot create a service that does not inherit from at least one service template.

Assign Service Templates to Hosts

Once you have created host and service templates, and individual hosts and services, you can assign service templates to hosts. (Only service templates can be assigned, not simple services.) There are three situations in which you can do this, and they are all performed in a similar manner:

  • Assign a service template to a host

  • Assign a service template to a host template

  • Assign a service template to a service set

For instance, to assign a service template to a host, go to Director > Host objects > Hosts and select the desired host. In the panel that appears on the right, select the Services tab, then click on the “Add service” action and choose the desired service template from the “Service” drop down menu.

Adding a service template to a host

Fig. 53 Adding a service template to a host

Then under “Main properties”, choose the appropriate service or service template in the drop down under Service.

To add a service template to a host template, follow the above instructions after choosing a host template at Director > Host objects > Host Templates.

For service sets, go to Director > Services > Service Sets, select the desired service set, switch to the Services tab, click the “Add service” action, and choose the service template from the “Imports” drop down menu.

Working with Apply for rules - tcp ports example

This example shows how to make use of an Apply For rule for services.

First, define a tcp_ports data field of type Array and assign it to a Host Template (refer to the Working with fields section for how to set up a data field). You also need to define a tcp_port data field of type String; we will associate it with a Service Template later.

Then, please go to the Dashboard and choose the Monitored services dashlet:

Dashboard - Monitored services

Fig. 54 Dashboard - Monitored services

Then create a new Service template with check command tcp:

Define service template - tcp

Fig. 55 Define service template - tcp

Then associate the data field tcp_port to this Service template:

Associate field to service template - tcp_port

Fig. 56 Associate field to service template - tcp_port

Then create a new apply-rule for the Service template:

Define apply rule

Fig. 57 Define apply rule

Now define the Apply For property: select the previously defined tcp_ports field associated with the host template. The Apply For rule defines a variable named config, accessible as $config$, which corresponds to the current element of the array it iterates over.

Set the Tcp port property to $config$:

Add field to template

Fig. 58 Add field to template

(Side note: if you can’t see your tcp_ports property in the Apply For dropdown, try creating one host with a non-empty tcp_ports value.)

That’s it: every host that defines a tcp_ports variable will now be assigned the Tcp Check service.
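For reference, the Apply For rule built above corresponds roughly to the following rendered Icinga 2 configuration (the template and service names are illustrative assumptions):

```
apply Service "Tcp Check " for (config in host.vars.tcp_ports) {
  import "tcp-check"            // the service template defined earlier
  check_command = "tcp"
  vars.tcp_port = config        // $config$ holds the current element of tcp_ports
}
```

Because the rule iterates over host.vars.tcp_ports, it only applies to hosts where that array is defined and non-empty.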

The Smart Director Module

Deploying Icinga objects (i.e., hosts and services) without long delays can be a requirement in many situations. For example, you may want to add a host that is critical for your business and start monitoring it as soon as it is added, without waiting a long time (e.g., half a day) to see the changes deployed.

The Smart Director module lets the NetEye user choose whether to apply changes as soon as an Icinga object is created, edited, or deleted, without executing a Director deploy.

How it works

Smart Director uses the predefined Director hooks to add an Instant Deploy flag as a custom property in the Director object form (host/service), in a separate Smart Director Settings section. If the user selects the flag, Director performs the requested operation (create/edit/delete) on the object, and at the same time the Icinga 2 API is called to apply those changes instantly, without waiting for a manual Director deploy.

By default the Instant Deploy flag is disabled and is not shown on the form. To enable it, go to the module configuration (Configuration > Modules > smartdirector > Configuration) and set Instant Deploy to Enabled. Note that the clone functionality is not supported by instant deploy: when the Instant Deploy field is set to Yes, the clone button is disabled. To clone an object, set Instant Deploy to No, clone the object, and then, if necessary, set Instant Deploy back to Yes and store it.

After a restart of the icinga2-master service, a deployment will automatically start. If this deployment fails, the smart-director-objects-integrity-neteyelocal service of the neteye-local host will be set to Critical and it will be automatically set to OK as soon as a successful deployment is triggered by the user.

A series of validation checks prevents users from using the Instant Deploy option during host or service create/modify operations on unsupported fields. For example, a user will see a clear error message below the Instant Deploy field when attempting to disable a host or service while Instant Deploy is set to Yes.

Table 17 Host Object Form

Fields          Operations        Values                     Non Supported Values
Cluster Zone    Create, Modify    User input or inherited    Cluster Zone modification
Groups          Create, Modify    User input or inherited    Group modification
Host name       Create, Modify    User input                 Hostname modification
Check command   Create, Modify    Inherited from template    Empty check command

Table 18 Service Object Form

Fields          Operations        Values                     Non Supported Values
Groups          Create, Modify    User input or inherited    Group modification
Service name    Create, Modify    User input                 Service Name modification
Check command   Create, Modify    User input or inherited    Empty check command

Director deployment should not be pending for any object (i.e., template, group, command, zone) associated with host/service.

How to enable Smart Director

Carry out the following steps to enable the module.

  1. From CLI, execute the following command

    pcs resource unmanage icinga2-master


    This step is required only in cluster environments

  2. Enable the flag in the module configuration: From NetEye’s GUI, go to Configuration ‣ Modules ‣ smartdirector ‣ Configuration

  3. Run the neteye_secure_install script


    In cluster environments execute the neteye_secure_install only on the node where the icinga2-master resource is running

  4. From CLI, execute the following command

    pcs resource manage icinga2-master


    This step is required only in cluster environments

How to check object deployment status before instant deploy

A new info button appears after the Instant Deploy field on the Director host and service form pages. When clicked, it opens a popup showing the differences between the object in memory and the one stored in the database. By inspecting these differences, the user can see which changes have not yet been deployed and are still in memory.


Users need to have at least “General Module Access” and “director/inspect” permissions in the authorization configuration of the Director module, in order to see the differences between the objects they are creating/editing and the ones present in the Icinga 2 runtime.

Importing Monitoring Objects

Automatically importing hosts, users and groups of users can greatly speed up the process of setting up your monitoring environment if the resources are already defined in an external source such as an application with export capability (e.g., vSphere, LDAP) or an accessible, structured file (e.g., a CSV file). You can view the Icinga2 documentation on importing and synchronizing.

The following import capabilities (source types) are part of NetEye Core:

  • CoreApi: Import from the Icinga2 API

  • Sql: Import rows from a structured database

  • REST API: Import rows from a REST API at an external URL

  • LDAP: Import from directories like Active Directory

Two other import source types are optional modules that can be enabled or disabled from the Configuration > Modules page.

You can import objects such as hosts or users (for notifications) by selecting the appropriate field for import. For example in LDAP for the field “Object class” you can select “computer”, “user” or “user group”.

The import process retrieves information from the external data source, but by itself it will not permanently change existing objects in NetEye such as hosts or users. To do this, you must also invoke a separate Synchronization Rule to integrate the imported data into the existing monitoring configuration. This integration could either be adding an entirely new host, or just updating a field like the IP address.

For each synchronization rule you must decide how every property should map from the import source field to the corresponding field in NetEye (e.g., from dnshostname to host_name). You can also define different synchronization rules on the same import source so that you can synchronize different properties at different times.
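Conceptually, such a mapping translates each imported row into NetEye object properties. The following is a hedged sketch of that idea; the field names are illustrative, and this is not Director’s internal API:

```python
# Sketch: map import-source fields (e.g. "dnshostname") onto
# NetEye object properties (e.g. "host_name"). Names are illustrative.
PROPERTY_MAP = {
    "dnshostname": "host_name",
    "ipv4address": "address",
}

def map_row(imported_row: dict) -> dict:
    """Translate one imported row into NetEye object properties."""
    return {
        target: imported_row[source]
        for source, target in PROPERTY_MAP.items()
        if source in imported_row
    }

row = {"dnshostname": "srv01.example.com", "ipv4address": "10.0.0.5"}
print(map_row(row))  # {'host_name': 'srv01.example.com', 'address': '10.0.0.5'}
```

Defining several such maps over the same import source corresponds to defining several synchronization rules on it.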

To trigger either the import or synchronization tasks, you must press the corresponding button on their panels. NetEye also allows you to schedule background tasks (Jobs) for import and synchronization. You can thus create regular schedules for importing hosts from external sources, for instance importing VMs from vSphere every morning at 7:00 AM, then synchronizing them with existing hosts at 7:30 AM. As with immediate import and synchronization, you must define a separate job for each task.

To begin importing hosts into NetEye, select Director > Import data sources as in Fig. 59.

The Automation menu

Fig. 59 The Automation menu section within Director.

The Import Source

The “Import source” panel containing all defined imports will appear. Click on the “Add” action to see the “Add import source” form (Fig. 60). Enter a name that will be associated with this configuration in the Import Source panel, add a brief description, and choose one of the source types described above. The links above will take you to the expanded description for each source type.

Adding a new import configuration

Fig. 60 Adding a new import configuration for VMware/vSphere.

Once you have finished filling in the form, press the “Add” button to validate the new import source configuration. If successful, you should see the new import source added as a row to the “Import source” panel. If you click on the new entry, you will see the additional tabs and buttons in Fig. 61 with the following effects:

  • Check for changes: This button checks to see whether an import is necessary, i.e. whether anything new would be added.

  • Trigger Import Run: Make the importable data ready for synchronization.

  • Modify: This panel allows you to edit the original parameters of this import source.

  • Modifiers: Add or edit the property modifiers, described in the section below.

  • History: View the date and number of entries of previous import runs.

  • Preview: See a preview of the hosts or users that will be imported, along with the effects of any property modifiers on imported values.

Import source panels

Fig. 61 Additional tabbed panels and actions for the newly defined import source.

Property Modifiers

Properties are the named fields that should be fetched for each object (row) from the data source. One field (column) must be designated as the key indexing column (key column name) for that data source, and its values (e.g., host names) must be unique, as they are matched against each other during the synchronization process to determine whether an incoming object already exists in NetEye. For instance, if you are importing hosts, the key indexing column should contain fully qualified domain names. If these values are not unique, the import will fail.
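The uniqueness requirement on the key column can be sketched as follows (a hedged illustration of the kind of check the import performs; column and host names are illustrative):

```python
# Sketch: find key-column values that appear more than once.
# Any duplicates would cause the import to fail.
from collections import Counter

def duplicate_keys(rows, key_column):
    """Return key values that occur more than once in the imported rows."""
    counts = Counter(row[key_column] for row in rows)
    return [key for key, n in counts.items() if n > 1]

rows = [
    {"dnshostname": "srv01.example.com"},
    {"dnshostname": "srv02.example.com"},
    {"dnshostname": "srv01.example.com"},  # duplicate key: import would fail
]
print(duplicate_keys(rows, "dnshostname"))  # ['srv01.example.com']
```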

From the form you can select among these options:

  • Property: The name of a field in the original import source that you want to modify.

  • Target Property: If you put a new field name here, the modified value will go into this new field, while the original value will remain in the original property field. Otherwise, the property field will be mapped to itself.

  • Description: A description that will appear in the property table below the form.

  • Modifier: The property modifier that will be used to change the values. Once you have created and applied property modifiers, the preview panel will show you several example rows so that you can check that your modifiers work as you intended. Fig. 62 shows an example modifier that sets the imported user accounts to not expire.

Property modifier panel

Fig. 62 A property modifier to set imported user accounts as having no expiration.

These modifiers can be selected in the Modifiers drop down box as in Fig. 63. Some of the more common modifiers are:


Modifier                                Source (Property)     Explanation/Example
Convert a latin1 string to utf8         guest.guestState      Change the text encoding
Get host by name (DNS lookup)                                 Find the IP address automatically
Convert upper case letters to lower
Regular expression based replacement
Bitmask match (numeric)                 userAccountControl    Derive a flag, e.g. is_ad_controller

For a description of Active Directory Bitmasks, please see this Microsoft documentation.

Available property modifiers

Fig. 63 Available property modifiers
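The numeric bitmask match can be sketched as below. This is a hedged illustration, not the module’s implementation; the flag values are the documented Active Directory userAccountControl bits (0x2000 SERVER_TRUST_ACCOUNT marks domain controllers, 0x0002 ACCOUNTDISABLE marks disabled accounts):

```python
# Sketch of a numeric bitmask match on AD's userAccountControl attribute.
SERVER_TRUST_ACCOUNT = 0x2000  # set on domain controller machine accounts
ACCOUNTDISABLE = 0x0002        # set on disabled accounts

def bitmask_match(value: int, mask: int) -> bool:
    """True if all bits of `mask` are set in `value`."""
    return (value & mask) == mask

# 532480 (0x82000) is a typical UAC value for an enabled domain controller.
print(bitmask_match(532480, SERVER_TRUST_ACCOUNT))  # True
print(bitmask_match(532480, ACCOUNTDISABLE))        # False
```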

Once you have created the new property modifier, it will appear under the “Property” list at the bottom of the Modifiers panel (see Fig. 62).


Here you can also order the property modifiers. Every modifier that can be applied to its property will be applied, so if you have multiple modifiers for a single property then be aware that they will be applied in the order shown in the list. For instance, if you add two regex rules, the second (lower) rule will be applied to the results of the first (higher).
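The ordering behavior described above can be sketched as follows (a hedged illustration; the two regex/lowercase modifiers and the hostname are invented for the example):

```python
# Sketch: modifiers are applied top to bottom, so each one operates on
# the output of the previous one.
import re

modifiers = [
    lambda v: v.lower(),                          # 1) lowercase the value
    lambda v: re.sub(r"\.example\.com$", "", v),  # 2) strip a domain suffix
]

def apply_modifiers(value, mods):
    for mod in mods:
        value = mod(value)
    return value

print(apply_modifiers("SRV01.EXAMPLE.COM", modifiers))  # srv01
```

Swapping the two modifiers would change the result, since the case-sensitive regex would no longer match the uppercase input.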

Synchronization Rules

When rows of data are being imported, it is possible that the new objects created from those rows will overlap with existing objects already being monitored. In these cases, NetEye will make use of Synchronization Rules to determine what to do with each object. You can choose from among the following three synchronization strategies, known as the Update Policy:

  • Merge: Integrate the fields one by one, where non-empty fields being imported win out over existing fields.

  • Replace: Accept the new, imported object over the existing one.

  • Ignore: Keep the existing object and do not import the new one.

In addition, you can combine any of the above with the option to Purge existing objects of the same Object Type as you are importing if they cannot be found in the import source.
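The three update policies can be modeled as below. This is a hedged sketch of the documented behavior, not Director’s implementation:

```python
# Sketch of the Merge / Replace / Ignore update policies.
def synchronize(existing: dict, imported: dict, policy: str) -> dict:
    if policy == "ignore":
        return existing
    if policy == "replace":
        return imported
    if policy == "merge":
        # Non-empty imported fields win over existing ones.
        merged = dict(existing)
        merged.update({k: v for k, v in imported.items() if v not in ("", None)})
        return merged
    raise ValueError(policy)

existing = {"address": "10.0.0.5", "location": "HQ"}
imported = {"address": "10.0.0.9", "location": ""}
print(synchronize(existing, imported, "merge"))
# {'address': '10.0.0.9', 'location': 'HQ'}
```

Note how Merge keeps the existing location because the imported one is empty, while Replace would have discarded it.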

Each synchronization rule should state how every property should map from the import source field to the corresponding field in NetEye (e.g., dnshostname -> host_name).

To begin, go to Director > Synchronize from the main menu and press the green “Add” action in the Sync rule panel in Fig. 64.

Existing synchronization rules

Fig. 64 The Sync Rule panel showing existing synchronization rules.

Now enter the desired information as in Fig. 65, including the name for this sync rule that will distinguish it from others, a longer description, the Object type for the objects that will be synchronized, an Update Policy from the list above, whether to Purge existing objects, and a Filtering Expression. This expression allows you to restrict the imported objects that will be synchronized based on a logical condition. The official Icinga2 documentation lists all operators that can be used to create a filter expression.

Choosing the object type

Fig. 65 Choosing the Object Type for a synchronization rule.

Now press the “Add” action. You will be taken to the Modify panel of the synchronization rule, which will allow you to change any parameters should you wish to. You should also see an orange banner (Fig. 66) that reminds you to define at least one Sync Property before the synchronization rule will be usable.

Adding a new sync rule

Fig. 66 Successfully adding a new synchronization rule.

The color of the banner is related to the status icon in the “Sync rule” panel:



  • Black question mark: This Sync Rule has never been run before.

  • Orange cycling arrows: There are changes waiting since the last time you ran this rule.

  • Green check: This Sync Rule was correctly run at the given date.

  • Red “X”: This Sync Rule resulted in an error at the given date.

Synchronization Rule Properties

A Sync Property is a mapping from a field in the input source to a field of a NetEye object. Separating the mapping from the sync rule definition allows you to reuse mappings across multiple import types.

To add a sync property, click on the “Properties” tab (Fig. 67) and then on the “Add sync property rule” action. (Existing sync properties are shown in a table at the bottom of this panel, and you can edit or delete them by clicking on their row in the table.)

Adding a sync property

Fig. 67 Adding a first sync property.

Fig. 68 shows the first step, adding a Source Name, which is one of the Import sources you defined in Fig. 60. If you have multiple sources, then this drop down box will be divided automatically into those sources that have been used in a synchronization rule versus those that haven’t.

Setting the Import Source

Fig. 68 Setting the Import Source

Next, choose the destination field (Fig. 69), which corresponds to the field in NetEye where imported values will be stored. Destination fields are the pre-defined special properties or object properties of existing NetEye objects. Note that some destination field values like custom variables will require you to fill in additional fields in the form.

Setting the Destination Field

Fig. 69 Setting the Destination Field

If you cannot find the appropriate destination field to map to, consider creating a custom field in the relevant Host Template.

Finally, choose the source column (Fig. 70), which is the list of fields found in the input source.

Setting the Source Column

Fig. 70 Setting the Source Column


Remember that the key column name is used as the ID during the matching phase. The automatic sync rule does not allow you to directly add any custom expressions to it.

Once you have finished entering the sync properties for a synchronization rule, you can return to the “Sync rule” tab to begin the synchronization process. As in Fig. 71, this panel will give you details of the last time the synchronization rule was run, and allow you to both check whether a new synchronization will result in any changes, as well as to actually start the import by triggering the synchronization rule manually.

Preparing to trigger synchronization

Fig. 71 Preparing to trigger synchronization with our new rule.


Both Import Source and Sync Rules have buttons (Fig. 61) that will let you perform import and synchronization at any moment. In many cases, however, it is better to schedule regular importation, i.e., to automate the process. In this case you should create a Job that automatically runs both import and synchronization at set intervals.

The “Jobs” panel is available from Director > Jobs. Clicking on the “Add” action will take you to the “Add a new Job” panel (Fig. 72). Here you will see four types of jobs, only two of which relate to importation and synchronization:

  • Config: Generate and eventually deploy an Icinga2 configuration.

  • Housekeeping: Clean up Director’s database.

  • Import: Create a regularly scheduled import run.

  • Sync: Create a regularly scheduled synchronization run.

Choosing the type of job

Fig. 72 Choosing the type of job

Select either the Import or Sync type. The following fields are common to both:

  • Disabled: Temporarily disable this job so you don’t have to delete it.

  • Run interval: The number of seconds until this job is run again.

  • Job name: The title of this job which will be displayed in the “Jobs” panel.

If you choose Import, you will see these additional fields:

  • Import source: The import to run, including the option to run all imports at once.

  • Run import: Whether to apply the results of the import (Yes), or just see the results (No).

If instead you choose Sync, you will see these other fields:

  • Synchronization rule: The sync rule to run, including the option to run all sync rules at once.

  • Apply changes: Whether to apply the results to your configuration (Yes), or just see the results (No).

Filling in the values for a sync job

Fig. 73 Filling in the values for a sync job

Once you press the green “Add” button, you will see the “Job” panel which will summarize the recent activity of that job, and the “Config” panel, which will let you change your job parameters.

LDAP/AD Import Source configuration

The LDAP/AD interface allows you to import hosts and users directly from a directory configured for the Lightweight Directory Access Protocol, such as Active Directory.

The documentation below assumes that you are already familiar with importing and synchronization in Director.

Before creating an LDAP import source, you will need to configure a Resource representing the connection to the LDAP server. A resource is created once for each external data source, and then reused for each functionality that needs it. Some resource types are:

  • Local database / file

  • LDAP

  • An SSH identity

In general, you will need to set up a resource for import when you need to know its access methods in order to connect to it. For LDAP, you will need the host, port, protocol, user name, password, and base DN. To create a resource for your LDAP server, go to Configuration > Application > Resources as shown in Fig. 74.

Adding LDAP resource

Fig. 74 Adding LDAP server as a resource.

Select the “Create a New Resource” action, which will display the “New Resource” panel. Enter the values for your organization (an example is shown in Fig. 75), then validate and save the configuration with the buttons below the form. Your new resource should now appear in the list at the left.

Configure connection details

Fig. 75 Configuring the LDAP connection details.

To create a new LDAP import source using the new resource, go to Director > Import data sources, click on the “Add” action, then enter a name and description for this import source. For “Source Type”, choose the “Ldap” option.

As soon as you’ve chosen the Source Type, the form will expand (Fig. 76), asking you for more details. Specify values for:

  • The object key (key column name)

  • The resource you created above

  • The DC and an optional Organizational Unit from where to fetch the objects

  • The type of object to create in NetEye (typically “computer”, “user” or “group”)

  • An LDAP filter where you can restrict the results, for instance:

    - To exclude non-computer types

    - To exclude disabled elements

    - With a RegEx to filter for specific DNS host names

  • A list of all LDAP fields to import in the “Properties” box, with each field name separated by a comma
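As a hedged example of such a filter, the following restricts the import to enabled computer accounts (1.2.840.113556.1.4.803 is Microsoft’s bitwise-AND matching rule, and flag 2 in userAccountControl marks disabled accounts):

```
(&(objectClass=computer)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
```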

Fig. 76 shows an example LDAP import configuration. Finally, press the “Add” button.

Configure the import details

Fig. 76 Configuring the LDAP import configuration details.

Your new import source should now appear in the list to the left, and you can now perform all of the actions associated with importation as described in the section on automation.

You will also need to define a Synchronization Rule for your new LDAP import source. This will allow you to create helpful property modifiers that can change the original fields in a regular way, for instance:

  • Resolve host names from IP addresses

  • Check if a computer is disabled

  • Standardize upper and lower case

  • Flag workstations or domain controllers

Import and Synchronization

Icinga Director offers very powerful mechanisms when it comes to fetching data from external data sources.

The following examples should give you a quick idea of what you might want to use this feature for. Please note that Import Data Sources are implemented as hooks in Director. This means that it is absolutely possible and probably very easy to create custom data sources for whatever kind of data you have. And you do not need to modify the Director source code for this, you can ship your very own importer in your very own Icinga Web 2 module. Let’s see an example with LDAP.

Import Servers from MS Active Directory

Create a new import source

Importing data from LDAP sources is pretty easy. We use MS Active Directory as an example source:

Import source

Fig. 77 Import source

You must have previously configured a corresponding LDAP resource in your Icinga Web. Then choose your preferred object class; you might add custom filters, and a search base should always be set.

The only tricky part here is the chosen Properties. You must know them and you are required to fill them in; there is no way around this right now. Also, choose one column as your key column.

To avoid trouble, make this the column that corresponds to the desired object name for the objects you are going to import. Rows duplicating this property will be considered erroneous, and the import will fail.

Property modifiers

Data sources like SQL databases provide very powerful modifiers themselves. With a handcrafted query you can solve lots of data conversion problems. Sometimes this is not possible, and some sources (like LDAP) do not even have such features.

This is where property modifiers come to the rescue. Your computer names are uppercase and you hate this? Use the lowercase modifier:

Lowercase modifier

Fig. 78 Lowercase modifier

You want to have the object SID as a custom variable, but the data is stored binary in your AD? There is a dedicated modifier:

SID modifier

Fig. 79 SID modifier

You do not agree with the way Microsoft represents its version numbers? Regular expressions are able to fix everything:

Regular expression modifier

Fig. 80 Regular expression modifier


The Import itself just fetches raw data, it does not yet try to modify any of your Icinga objects. That’s what the Sync rules have been designed for. This distinction has a lot of advantages when it goes to automatic scheduling for various import and sync jobs.

When creating a Synchronization rule, you must decide which Icinga objects you want to work with. You could decide to use the same import source in various rules with different filters and properties.

Synchronization rule

Fig. 81 Synchronization rule

For every property you must decide whether and how it should be synchronized. You can also define custom expressions, combine multiple source fields, set custom properties based on custom conditions and so on.

Synchronization properties

Fig. 82 Synchronization properties

Now you are all done and ready to a) launch the Import and b) trigger your synchronization run.

Use Text Files as an Import Source

The FileShipper interface allows you to import objects like hosts, users and groups from plain-text file formats like CSV and JSON.

The documentation below assumes that you are already familiar with Importing and Synchronization in Director. Before using FileShipper, please be sure that the module is ready by:

  • Enabling it in Configuration > Modules > fileshipper.

  • Creating paths for both the configuration and the files:

    $ mkdir /neteye/shared/icingaweb2/conf/modules/fileshipper/
    $ mkdir /data/file-import

    And then defining a source path for those files within the following configuration file:

    $ cat > /neteye/shared/icingaweb2/conf/modules/fileshipper/imports.ini
    [NetEye File import]
    basedir = "/data/file-import"
Adding a new Import Source

From Director > Import data sources, click on the “Add” action, then enter a name and description for this import source. For “Source Type”, choose the “Import from files (fileshipper)” option as in Fig. 83. The form will then expand to include several additional options.

Add a Fileshipper Import Source

Fig. 83 Add a Fileshipper Import Source

Choose a File Format

Next, enter the name of the principal index column from the file, and choose your desired file type from File Format as in Fig. 84.

Choose a File Format

Fig. 84 Choosing the File Format.

If you would like to learn more about the supported file formats, please read the file format documentation.

Select the Directory and File(s)

You will now be asked to choose a Base Directory (Fig. 85).

Choose a Base Directory

Fig. 85 Choosing the Base Directory.

The FileShipper module doesn’t allow you to freely choose any file on your system. You must provide a safe set of base directories in Fileshipper’s configuration directory as described in the first section above. You can include additional directories if you wish by creating each directory, and then modifying the configuration file, for instance:

[NetEye CSV File Import]
basedir = "/data/file-import/csv"

[NetEye XLSX File Import]
basedir = "/data/file-import/xlsx"

Now you are ready to choose a specific file (Fig. 86).

Choose a specific file

Fig. 86 Choosing a specific file or files.


For some use-cases it might also be quite useful to import all files in a given directory at once.

Once you have selected the file(s), press the “Add” button. You will then see two additional parameters to fill for the CSV files: the delimiter character and field enclosure character (Fig. 87). After filling them out, you will need to press the “Add” button a second time.

Add extra parameters

Fig. 87 Add extra parameters.

The new import source will now appear in the list (Fig. 88). Since you have not used it yet, it will be prefixed by a black question mark.

The newly added import source

Fig. 88 The newly added import source.

Now follow the steps for importing at the page on Importing and Synchronization in Director. Once complete, you can then look at the Preview panel of the Import Source to check that the CSV formatting was correctly recognized. For instance, given this CSV file:

dnshostname,displayname,OS
,NE4 North Building 1,Windows
,NE4 North Building 2,Linux

then Fig. 89 shows the following preview:

CSV preview

Fig. 89 Previewing the results of CSV import.

If the preview is correct, then you can proceed to Synchronization, or set up a Job to synchronize on a regular basis.

Supported File Formats

Depending on the installed libraries, the Import Source currently supports multiple file formats.

CSV (Comma Separated Value)

CSV is not a well-defined data format, so the Import Source has to make some assumptions and ask for optional settings.

Basically, the rules to follow are:

  • a header line is required

  • each row has to have as many columns as the header line

  • defining a value enclosure is mandatory, but you do not have to use it in your CSV files. So while your import source might be asking for "hostname";"ip", it would also accept hostname;ip in your source files

  • a field delimiter is required, this is mostly comma (,) or semicolon (;). You could also opt for other separators to fit your very custom file format containing tabular data
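The rules above can be illustrated with Python’s stdlib csv module as a stand-in parser (a hedged sketch; the data is invented, and note that the second data row omits the enclosure yet still parses):

```python
# Sketch: delimiter and optional value enclosure, as described above.
import csv
import io

data = '"hostname";"ip"\n"csv1";"127.0.0.1"\ncsv2;127.0.0.2\n'

rows = list(csv.DictReader(io.StringIO(data), delimiter=";", quotechar='"'))
print(rows)
# [{'hostname': 'csv1', 'ip': '127.0.0.1'}, {'hostname': 'csv2', 'ip': '127.0.0.2'}]
```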

Sample CSV files

Simple Example


More complex but perfectly valid CSV sample

"hostname","ip address","location"
"csv3","","Nott"", at Home"
JSON - JavaScript Object Notation

JSON is a pretty simple standardized format with good support among most scripting and programming languages. Nothing special to say here, as it is easy to validate.

Sample JSON files

Simple JSON example

This example shows an array of objects:

[{"host": "json1", "address": ""},{"host": "json2", "address": ""}]

This is the easiest machine-readable form of a JSON import file.
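Because JSON is easy to validate, you can sanity-check an import file before pointing the Import Source at it. A minimal sketch using Python's json module on the single-line sample above:

```python
import json

# The single-line sample from above: an array of objects, one per host
# (address values are left empty, as in the sample).
raw = '[{"host": "json1", "address": ""},{"host": "json2", "address": ""}]'

hosts = json.loads(raw)  # raises ValueError if the file is not valid JSON
print([h["host"] for h in hosts])  # ['json1', 'json2']
```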

Pretty-formatted extended JSON example

Single-line JSON files are not very human-friendly, so you’ll often meet pretty-printed JSON. Such files also make perfectly valid import candidates:

  "": {
    "host": "",
    "address": "",
    "location": "HQ",
    "groups": [ "Linux Servers" ]
  "": {
    "host": "",
    "address": "",
    "location": "HQ",
    "groups": [ "Windows Servers", "Lab" ]
Microsoft Excel

XLSX, the Microsoft Excel 2007+ format, is supported since v1.1.0.

XML - Extensible Markup Language

When working with XML please try to ship simple files as shown in the following example.

Sample XML file

<?xml version="1.0" encoding="UTF-8" ?>
YAML (Ain’t Markup Language)

YAML is anything but simple and well defined: it allows you to write the same data in various ways. This format is useful if you already have files in this format, but it is not recommended for future use.

Sample YAML files

Simple YAML example

- host: ""
  address: ""
  location: "HQ"
- host: ""
  address: ""
  location: "HQ"
- host: ""
  address: ""
  location: "HQ"

Advanced YAML example

Here’s an example using Puppet fact files as a data source, but this might work in a similar way for many other tools.

Instead of a single YAML file, you may need to deal with a directory full of files. The Import Source documentation shows you how to configure multiple files. Here you can see a part of one such file:

--- !ruby/object:Puppet::Node::Facts
  name: foreman.localdomain
  values:
    architecture: x86_64
    timezone: CEST
    kernel: Linux
    system_uptime: "{\x22seconds\x22=>5415, \x22hours\x22=>1, \x22days\x22=>0, \x22uptime\x22=>\x221:30 hours\x22}"
    domain: localdomain
    virtual: kvm
    is_virtual: "true"
    hardwaremodel: x86_64
    operatingsystem: CentOS
    facterversion: "2.4.6"
    filesystems: xfs
    fqdn: foreman.localdomain
    hardwareisa: x86_64
    hostname: foreman

vSphere Import Source configuration

The vSphere interface allows you to import hosts directly from a vCenter server.

The documentation below assumes that you are already familiar with importing and synchronization in Director. Before using the vSphere import interface, please be sure that the module has been enabled by following the sequence Configuration > Modules > vsphere.

To create a new VMware vSphere import source, go to Director > Import data sources, click on the “Add” action, then enter a name and description for this import source. For “Source Type”, choose the “VMware vSphere” option as in Choosing the VMware vSphere option..

Choose Import Source name

Fig. 90 Choosing the VMware vSphere option.

As soon as you’ve chosen the correct Source Type, it will ask you for more details. Please try to avoid creating connections to every single ESX host and instead fetch your data from your vCenter as shown in Configuring the vCenter connection details..

Configure connection details

Fig. 91 Configuring the vCenter connection details.

That’s it. Once you’ve confirmed that you want to add this new Import Source, you’re all done with the configuration.


As described on the importing page, the key column name is used as the ID during the matching phase. The vSphere import rule automatically proposes name as this field. Whichever index you use, you cannot directly modify it with any custom expressions.

You can now click on the Preview tab to see what the results look like (Previewing the results of importing from the source) before deciding whether to run the full import.

Previewing the results of importing from the source

Fig. 92 Previewing the results of importing from the source

Be sure to define a Synchronization Rule for your new import source, as explained in the related Director documentation.

If you prefer to use the Icinga2 CLI commands instead of the web interface, see this reference documentation.

Configuration Baskets

Director already takes care of importing configurations for monitored objects. This same concept is also useful for Director’s internal configuration. Configuration Baskets allow you to export, import, share and restore all or parts of your Icinga Director configuration, as many times as you like.

Configuration baskets can save or restore the configurations for almost all internal Director objects, such as host groups, host templates, service sets, commands, notifications, sync rules, and much more. Because configuration baskets are supported directly in Director, all customizations included in your Director configuration are imported and exported properly. Each snapshot is a persistent, serialized (JSON) representation of all involved objects at that moment in time.

Configuration baskets allow you to:

  • Back up (take a snapshot) and restore a Director configuration…

    • To be able to restore in case of a misconfiguration you have deployed

    • Copy specific objects as a static JSON file to migrate them from testing to production

  • Understand problems stemming from your changes with a diff between two configurations

  • Share configurations with others, either your entire environment or just specific parts such as commands

  • Choose only some elements to snapshot (using a custom selection) in a given category such as a subset of Host Templates

In addition, you can script some changes with the following command:

# icingacli director basket [options]

Using Configuration Baskets

To create or use a configuration basket, select Icinga Director > Configuration Baskets. At the top of the new panel are options to:

  • Make a completely new configuration basket with the Create action

  • Make a new basket by importing a previously saved JSON file with the Upload action

At the bottom you will find the list of existing baskets and the number of snapshots in each. Selecting a basket will take you to the tabs for editing baskets and for taking snapshots.

Create a New Configuration Basket

To create or edit a configuration basket, give it a name, and then select whether each of the configuration elements should appear in snapshots for that basket. The following choices are available for each element type:

  • Ignore: Do not put this element in snapshots (for instance, do not include sync rules).

  • All of them: Put all items of this element type in snapshots (for example, all host templates).

  • Custom Selection: Put only specified items of this element type in a snapshot. You will have to manually mark each element on the element itself. For instance, if you have marked host templates for custom selection, then you will have to go to each of the desired host templates and select the action Add to Basket. This will cause those particular host templates to be included in the next snapshot.

Uploading and Editing Saved Baskets

If you or someone else has created a serialized JSON snapshot (see below), you can upload that basket from disk. Select the Upload action, give it a new name, use the file chooser to select the JSON file, and click on the Upload button. The new basket will appear in the list of configuration baskets.

Editing a basket is simple: Click on its name in the list of configuration baskets to edit either the basket name or else whether and how each configuration type will appear in snapshots.

Managing Snapshots

From the Snapshots panel you can create a new snapshot by clicking on the Create Snapshot button. The new snapshot should immediately appear in the table below, along with a short summary of the included types (e.g., 2x HostTemplate) and the time. If no configuration types were selected for inclusion, the summary for that row will only show a dash instead of types.

Clicking on a row summary will take you to the Snapshot panel for that snapshot, with the actions

  • Show Basket: Edit the basket that the snapshot was created from

  • Restore: Prompts for the target Director database; clicking on the Restore button will begin the process of restoring from the snapshot. Configuration types that are not in the snapshot will not be replaced.

  • Download: Saves the snapshot as a local JSON file.

followed by its creation date, checksum, and a list of all configured types (or custom selections).

For each item in that list, the keywords unchanged or new will appear to the right. Clicking on new will show the differences between the version in the snapshot and the current configuration.


NetEye can be set up to send SMS notifications, allowing DevOps teams and network managers to be informed immediately about problems in the monitored infrastructure and to promptly take adequate action.

It is not always possible to directly connect a physical SMS gateway appliance, such as an SMS modem, to NetEye. For instance, you may be in a situation where:

  • NetEye is operated on virtual infrastructure (e.g., the Cloud or with VMware)

  • The mobile network signal is weak, forcing the SMS Gateway to be located at a distance that exceeds the maximum length for a serial cable

A dedicated SMS LAN Gateway (i.e., a serial-to-ethernet adaptor) that can solve these problems is available for use with NetEye. These devices are sourced from Moxa and are tested for compatibility with the notification strategy of NetEye.

If you are interested in the SMS LAN Gateway, please contact the NetEye support.

How It Works

With the Moxa device in TCP Server Mode, the host computer (NetEye) initiates contact with the NPort 6150, establishes the connection, and receives data from the serial device. This operational mode also supports up to 8 simultaneous bidirectional TCP/IP connections, enabling multiple hosts to collect data from the same serial device at the same time.

The Moxa NPort 6150 supports SSL and SSH, encrypting data before sending it over the network. It has port buffers for storing serial data when the Ethernet connection is down, and the serial connection supports RS-232, RS-422 and RS-485.

NPort 6150 Hardware Setup

We recommend that you assign a static IP for the Moxa NPort device before connecting it.

You should first set up both hardware devices before proceeding to configure the software. The basic steps are:

  1. Set up the Moxa NPort device and power it on

  2. Check that the Ready and Link LEDs are green

  3. Connect the modem to the Moxa NPort device with an RS-232 serial cable, attach an antenna, insert the SIM, and power it on

  4. Check that the modem’s LED changes from red to green

  5. Connect the Moxa NPort device to the network with an Ethernet cable

Moxa Device Configuration

The Moxa NPort series has both a built-in telnet server and a built-in web server for configuration. You can use either method as they have the same functionality. You should change the IP address, netmask and gateway according to your needs. Other changes are not necessary as we connect our Wavecom GSM Gateway to the Serial interface on the adaptor and the default settings work without problems.

The device is configured at the factory with the following default profile:

IP address:
Username:    admin
Password:    <blank or "moxa">

Once connected, you will be asked to change your password (press ESC to skip this). To change the default IP address, go to the Network tab and then the Basic tab. Change the field for the second line marked “IPv4 address”, and change the netmask and gateway if necessary. To change the serial interface type from the default RS-232, go to Port > Line > Interface.

Configuration for NetEye

You will need to use the neteye-extras repository to install the modules without updating the kernel itself:

yum install moxa-npreal2 kmod-npreal2 --enablerepo=neteye-extras

If you connect the Moxa NPort device to a remote switch on the network, you should skip this next step. However, if you connect your Moxa device directly to a physical NetEye server on a dedicated ethernet port, you must configure the dedicated network interface on NetEye with the desired network configuration, for instance:

[root@neteye ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
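The exact values depend on your network. As a sketch, assuming the dedicated interface is eth1 and the hypothetical subnet 192.168.127.0/24 is reserved for the Moxa device, the file might look like:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth1 (hypothetical example values)
DEVICE=eth1
BOOTPROTO=none
IPADDR=192.168.127.1
NETMASK=255.255.255.0
ONBOOT=yes
```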

Next, configure the IP address of the NPort 6150 for the Linux driver (this step is necessary since the Moxa NPort device is connected to NetEye via Ethernet). After configuring npreals, you will need to return here to insert the number appended to the end of the IP address (see below for which value to use):

[root@neteye ~]# cat /etc/sysconfig/npreals
# Configure devices in this mode:

Start and Test the Driver Daemon

Set up the npreals system service:

# systemctl enable npreals.service
# systemctl start npreals

Next, you will need to manually install the UUCP package, which is used to communicate with the modem:

# yum install uucp

Now get the device’s tty name from the npreals configuration. In this example, it is ttyr00:

$ cat /usr/lib/npreal2/driver/
#   This configuration file is created by Moxa NPort      #
#   Administrator Program automatically, please do not    #
#   modify this file by yourself.                         #
#[Minor] [ServerIP] [data]  [cmd]   [FIFO]  [SSL]   [ttyName] [coutName] [interface][mode][BackIP]
0 950 966 1   0   ttyr00  cur00   0   0   (null)

You MUST use this tty name in the modem’s smsd.conf file under the section [GSM1] > device (here ttyr00 instead of ttyS0 as described in the Modem Setup section). In addition, check that the number in the [FIFO] field is the number after the colon in the file /etc/sysconfig/npreals as shown further above.

Next, test your access to the Moxa NPort device with the device’s tty name from the prior step using the cu terminal program:

# ping
# cu -s 115200 -l /dev/ttyr00

If the connection was successful (i.e., you see the “Connected.” response), you can now proceed to configure the GSM modem interface and test sending SMS messages as described in the setup. Note that to end a cu session you will need to enter the command ~. since the terminal program will block all control characters that typically end a session.

If instead you see a message such as:

cu: /dev/ttyr00: Line in use

then double check your configuration in smsd.conf (especially the tty device) and the permissions as described below.

Final Notes

To ensure that NetEye can send SMS messages via the Moxa NPort device, you should change both the owner and group owner of the spool/outgoing/ folder from root to icinga:

# chown icinga:icinga /neteye/local/smsd/data/spool/outgoing/

Otherwise, NetEye cannot create the temporary files needed for sending SMS messages.

If you want to use smssend, remember that this script uses a different embedded default path, which you will need to change:

# /usr/bin/smssend

SMS Modem Setup

NetEye uses SMS Server Tools 3 (smstools) for handling the GSM modem interface. More detailed configuration documentation can be found in the official smstools documentation.

The configuration file is located at /neteye/local/smsd/conf/smsd.conf.

Here is a sample smsd.conf file:

# Example smsd.conf.

devices = GSM1
#logfile = 1
logfile = /neteye/local/smsd/log/smstools.log
# loglevel [1-7]:
# A higher number indicates increased verbosity.
loglevel = 7

failed = /neteye/local/smsd/data/spool/failed
incoming = /neteye/local/smsd/data/spool/incoming
checked = /neteye/local/smsd/data/spool/checked
outgoing = /neteye/local/smsd/data/spool/outgoing

# Use the modem's tty, or the tty of a bridging device if you are using one
device = /dev/ttyS0
# For older WaveCom modem devices, use this baudrate:
#baudrate = 9600
# For newer WaveCom and Sierra devices, use this instead:
baudrate = 115200
# For the new Sierra FX30(S) modem, uncomment this line:
#rtscts = no
mode = new
incoming = yes
cs_convert = yes
# Uncomment this line if your SIM has a pin (we recommend leaving the SIM PIN blank):
#pin = 1111
eventhandler = /usr/share/neteye/eventhandler/bin/


For Sierra FX30 and FX30S models, remember to uncomment the parameter rtscts = no above. Also, if you are using the Moxa NPort 6150 to extend your modem’s range, be sure to insert the Moxa tty device in place of ttyS0.

After changing the configuration, you will need to restart the SMS daemon as follows:

[root@neteye ~]# systemctl restart smsd

Testing SMS Notifications

The phone number should include the country code and contain only numbers. So for instance a phone number in Italy might be 00391234567890.

There are two methods for testing that SMS messages are correctly sent:

  1. Send an SMS message directly from the command line with the smssend script: # /usr/bin/smssend 00391234567890 "TEST FROM NETEYE"

  2. Interact directly with the smsd daemon. To do this, create a file with content like the following in /neteye/local/smsd/data/spool/outgoing/ (the actual name of the file doesn’t matter). The smsd daemon will send the SMS without further intervention:

    To: 00391234567890
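Step 2 can be sketched in a few lines. In this sketch a temporary directory stands in for the real spool path /neteye/local/smsd/data/spool/outgoing, and the file name neteye-test-sms is arbitrary; in the smstools file format, the message text follows the header lines after one blank line:

```python
import pathlib
import tempfile

# A temporary directory stands in for /neteye/local/smsd/data/spool/outgoing.
spool = pathlib.Path(tempfile.mkdtemp())

# smstools outgoing file format: header lines, one blank line, message text.
message = "To: 00391234567890\n\nTEST FROM NETEYE\n"
sms_file = spool / "neteye-test-sms"  # the file name itself does not matter
sms_file.write_text(message)

print(sms_file.read_text().splitlines()[0])  # To: 00391234567890
```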

To check the status, you can look directly in the directories under /neteye/local/smsd/data/spool/ or check the log file, for instance with:

# tail -f /neteye/local/smsd/log/smstools.log

SMS Notification Setup

After you have properly set up an SMS modem, you should configure notifications. To achieve this, you need to create suitable users and notifications. The whole process requires creating new data fields and commands, then notification templates and user templates to define users, and finally using them within an apply rule.


In case of a cluster installation you need an SMS modem connected to each node. It is not possible to use a single Moxa configured on two different nodes.

Data Field Creation

Create a data field with the following parameters:

  • Field name = “user_sms”

  • Caption = “User SMS”

This field will be needed when defining new user templates, to allow them to access the SMS functionality.

Commands Creation

Now you have to create two commands, one to activate SMS notifications for hosts and one for services.

The command for the hosts needs the following parameters:

  • Command type = “Notification Plugin Command”

  • Command name = “sms-host-notification”

  • Command = “/usr/local/bin/”

  • Timeout = 60

  • Disabled = “no”

  • Field = “user_sms”

  • Mandatory = “Mandatory”

Create a command for the services with the following parameters:

  • Command type = “Notification Plugin Command”

  • Command name = “sms-service-notification”

  • Command = “/usr/local/bin/”

  • Timeout = 60

  • Disabled = “no”

  • Field = “user_sms”

  • Mandatory = “Mandatory”

Notification Template Creation

Now, by using the commands defined in the previous section, set up two new notification templates as follows:

Host notification template:

  • Notification Template = “sms_host_notification”

  • Notification command = “sms-host-notification”

  • States = Down, Up

  • Transition types = Acknowledgement, Custom, DowntimeEnd, DowntimeRemoved, DowntimeStart, FlappingEnd, FlappingStart, Problem, Recovery

Service notification template:

  • Notification Template = “sms_service_notification”

  • Notification command = “sms-service-notification”

  • States = Critical, OK, Unknown, Warning

  • Transition types = Acknowledgement, Custom, DowntimeEnd, DowntimeRemoved, DowntimeStart, FlappingEnd, FlappingStart, Problem, Recovery

Notifications - Users - User Templates Creation

We can now create a user template with the following parameters:

  • User template name = “notify_allEvents”

  • Send notifications = “yes”

  • Custom properties = “User SMS”


  • Field = “user_sms”

  • Mandatory = “Optional”

User configuration Creation

To create a user that is allowed to send SMS messages, import the template notify_allEvents and specify User SMS as a Custom property.

Notification Apply Rule Creation

As a last step, create an apply rule with the following parameters:

  • Imports = “sms_host_notification”

  • Users = “your_user“, i.e., the user created in the previous section.

  • Apply to = “Hosts”

  • Assign where “” = “*”

Business Monitoring

To view or configure Business Processes in NetEye 4, click on “Business Processes” in the left navigation bar. If you haven’t yet configured any Business Processes, you will see a new Dashboard like the one in Fig. 93 with two options: “Create” and “Upload”.


Fig. 93 The empty Business Process Overview dashboard.

Configure a new Business Process

To create a new Business Process configuration, click on the “Create” button. You should see a new dashboard panel (Fig. 94) that appears to the right of the Business Process Overview dashboard.


Fig. 94 The new Business Process creation panel.

You will need to fill in the following fields to create the new Business Process:

  • Configuration name: The Business Process definition will be stored with this name. This is going to be used when referencing this process in URLs and in Check Commands.

  • Title: You might optionally want to provide an additional title. In that case the title is shown in the GUI, while the name is still used as a reference. The title will default to the name.

  • Description: Provide a short description (within 100-150 characters) explaining what this configuration provides. This will be shown on the Dashboard.

  • Backend: Icinga Web 2 currently uses only one Monitoring Backend, but in theory you could configure multiple ones. They won’t be usable in a meaningful way at the time of this writing. Still, you might want to use a different backend as a data provider for your Business Process.


    Usually this should not be changed.

  • State Type: You can decide whether SOFT or HARD states should be used as a basis for calculating the state of a Business Process definition.

  • Add to menu: Business Process configurations can be linked to the Icinga Web 2 left navigation bar. Only the first five configurations a user is allowed to see will be shown there.

Fig. 95 shows an example Business Process with the above values filled in.


Fig. 95 The new Business Process creation panel with example values.

Store the Business Process configuration

Once you are done, click Add to store your new (still empty) Business Process configuration. The panel will show an empty “Add” tile as in Fig. 96, ready for you to add new Business Processes.


Fig. 96 The new Business Process creation panel.

From here you can now add as many deeply nested Business Processes as you want.

Create your first Business Process Node

A Business Process Node consists of a name, title, an operator and one or more child nodes. Each node can be a Root Node, a child node of other Business Process Nodes, or both. When you first create a Business Process, there will be no nodes, so you will be prompted to create one as in Fig. 97.


Fig. 97 Ready to add the first Business Process node.

Configuring Your First Node

To create our first Business Process Node, click on the Add button. You will then see the node configuration form (an asterisk indicates a required field):


Fig. 98 The Business Process node configuration form.

Configuring a Business Process requires filling in the following fields:

  • Name: The first setting is the Node name, an identifier that must be unique across all nodes that will be defined. This identifier will be used within every URL and also within any Check Commands referring to this node by an Icinga Service Check.

  • Display Name: (Optional) As uniqueness sometimes leads to not-so-beautiful names, you can also specify a display title which will be used across all dashboards:

  • Operator: A Business Process’ operator defines its behavior, specifying how its own state will be calculated. It can be one of AND, OR, NOT, DEG or MIN.

  • Visualization: This describes when the node should be displayed. Since this node will be a root node, the form automatically suggests that you create a Toplevel Process. Alternatively, you could also create a Subprocess. As we are currently not adding this node to another node, that choice would lead to an Unbound Node that could be linked to later on.

  • Info URL: (Optional) You might also want to provide a link to additional information related to a specific process. This could be instructions with more technical details, or hints explaining what should happen if an outage occurs. You might not want to do so for every single node, but it might come in handy for your most important (e.g., top level or high priority) nodes.

Completed Business Process Node

That’s it, you are ready to submit the form. After clicking the green “Submit” button, you should see your first completed Business Process Node. A red notification as in Fig. 99 reminds you that your pending changes have not been stored yet.


Fig. 99 The completed business process node, “My Root BP Node”.

You can now Store the Configuration as is, or move on to add additional nodes to continue your configuration. In the latter case, you will also need to select a Node type. This can be either:

  • Existing Process (Node): When you want to link this node to another node you have already created. Choose the appropriate existing node from the drop down list, then click the green “Submit” button. (Be sure not to link this node to itself.)

  • New Process Node: Here you can create a new, independent root-level node.

  • Host: When you have created the lowest level of intermediate nodes in your business process hierarchy, you can add the hosts that you would like to monitor as part of that business process.

  • Service: For each host in your business process hierarchy, you can specify particular services on the host that are part of the business process, and thus exclude other, irrelevant services on that host.


While you can create Business Process nodes whenever you like, you cannot add Host and Service nodes until you have created the underlying hosts and services themselves. Thus you should complete that step before setting up your business process hierarchy.

To edit a node you have already created, click on the “Unlock” button and then choose the “Notepad” icon on the node you wish to change.


The grey arrows below the Business Process title are breadcrumbs that show your current position. Their use is described in more detail in the official Business Process documentation.

Importing Processes

To avoid redundancy and make complex Business Process Configurations easier to maintain, it is possible to import existing business processes from other current configurations by completing the following tasks:

  1. Create node. In order to be able to import a process, you first need to create a root node. You cannot import processes directly into the root level.

    Subprocesses Only

    Fig. 100 Subprocesses Only

  2. Import a Process. Once the related configuration form is open, choose Existing Process and wait for the form to refresh.

    Existing Process

    Fig. 101 Existing Process

  3. Choose Configuration. You can now choose the configuration to import processes from. Or simply hit Next to just utilize a process from the current configuration.

    Choose Configuration

    Fig. 102 Choose Configuration

  4. Select Processes. Next, select the processes you want to import and submit the form.

    Select Processes

    Fig. 103 Select Processes

  5. Import Successful. You can now see the resulting imported process. Don’t forget to save your changes!

    Import Successful

    Fig. 104 Import Successful


Every Business Process requires an Operator. This operator defines its behaviour and specifies how its very own state is going to be calculated.


The AND operator selects the WORST state of its child nodes.


The OR operator selects the BEST state of its child nodes.


The MIN operator selects the WORST state out of the BEST n child node states.

Advanced Topics

In this section we deal with some advanced topics, including passive monitoring, jobs, the API, and the Icinga retention policy.

Passive monitoring

When Tornado is installed, NetEye creates an Icinga Web 2 user neteye-tornado and an associated Icinga Web 2 role neteye_tornado_director_apis, which only gives access to the module Director, with limited authorizations on the actions that Tornado can perform.


This user and its permissions are required by the backend for Tornado to call the Director API, and in particular for the authentication and authorization of the Tornado Director Executor against the APIs of the Icinga Director. Therefore, neither the user nor the associated role must be removed from the system.

In case you need it, for example to reconfigure the Tornado Director Executor, the password for the user neteye-tornado is stored in the file /root/.pwd_icingaweb2_neteye_tornado.

Processing Tree

Within the NetEye interface you can view the Tornado rule configuration graphically instead of in a command shell. The Configuration Viewer (available by clicking on Tornado in the left side menu) shows the processing tree in a top-down format, allowing you to verify that the structure of the rules you have created is as you intended.


While a more complete description of all Tornado elements is available in the official documentation, they are summarized here in enough detail that you can understand this page without reading the full official documentation.

  • Filter: A node in the processing tree that contains (1) a filter definition and (2) a set of child nodes, each of which corresponds to a condition in the filter definition. You can use a Filter to create independent pipelines for different types of events, reducing the amount of time needed to process them.

  • Implicit Filter: If a filter is not present in a node, a default filter is created which forwards an event to ALL child nodes, rather than a particular one that matched a filter condition.

  • Rule Set: A leaf node that contains multiple rules to be matched against the event one by one once the filters in the parent nodes have let an event through.

  • Rule: A single rule within a rule set, which can be matched against an event.

  • Processing Tree: The entire set of filters and rules that creates the hierarchical structure where events are filtered and then matched against one or more rules.

Basic Configuration Information

The location of configuration files in the file system is pre-configured in NetEye. NetEye automatically starts Tornado as follows:

  • Reads the configuration from the /neteye/shared/tornado/conf/ directory

  • Starts the Tornado Engine

  • Searches for Filter and Rule definitions in /neteye/shared/tornado/conf/rules.d/

The structure of this last directory reflects the Processing Tree structure. Each subdirectory can contain either:

  • A Filter and a set of sub directories corresponding to the Filter’s children

  • A Rule Set

Each individual Rule or Filter to be included in the processing tree must be in its own file, in JSON format (Tornado will ignore all other file types). For instance, consider this directory structure:

rules.d
 |- node_0
 |    |- 0001_rule_one.json
 |    \- 0010_rule_two.json
 |- node_1
 |    |- inner_node
 |    |    \- 0001_rule_one.json
 |    \- filter_two.json
 \- filter_one.json

In this example, the processing tree is organized as follows:

  • The root node is a filter named “filter_one”.

  • The filter filter_one has two child nodes: node_0 and node_1.

    • node_0 is a rule set that contains two rules called rule_one and rule_two, with an implicit filter that forwards all incoming events to both of its child rules.

    • node_1 is a filter with a single child named “inner_node”. Its filter filter_two determines which incoming events are passed to its child node.

  • inner_node is a rule set with a single rule called rule_one.

Within a rule set, the alphanumeric order of the file names determines the execution order. A rule filename is composed of two parts separated by the first ‘_’ (underscore) symbol: the first part determines the rule execution order, and the second is the rule name. For example:

  • 0001_rule_one.json -> 0001 determines the execution order, “rule_one” is the rule name

  • 0010_rule_two.json -> 0010 determines the execution order, “rule_two” is the rule name

Rule names must be unique within their own rule set. There are no constraints on rule names in different rule sets.
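The naming convention above can be sketched in a few lines of Python; this is an illustration of the rule, not Tornado code:

```python
# Sketch of how Tornado derives execution order and rule name from a
# rule file name: drop the extension, then split at the FIRST underscore.
def parse_rule_filename(filename: str) -> tuple[str, str]:
    stem = filename.removesuffix(".json")   # drop the '.json' extension
    order, _, name = stem.partition("_")    # split only at the first '_'
    return order, name

print(parse_rule_filename("0001_rule_one.json"))  # ('0001', 'rule_one')
print(parse_rule_filename("0010_rule_two.json"))  # ('0010', 'rule_two')
```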

Similar to what happens for Rules, Filter names are also derived from the filenames. However, in this case, the entire filename corresponds to the Filter name.

In the example above, the “filter_one” node is the entry point of the processing tree. When an Event arrives, the Matcher will evaluate whether it matches the filter condition, and will pass the Event to one (or more) of the filter’s children. Otherwise it will ignore it.

A node’s children are processed independently. Thus node_0 and node_1 will be processed in isolation and each of them will be unaware of the existence and outcome of the other. This process logic is applied recursively to every node.

Structure of a Filter

A Filter contains these properties, defined in its JSON file:

  • description: A string value providing a high-level description of the filter.

  • active: A boolean value; if false, the filter’s children will be ignored.

  • filter: An operator that, when applied to an event, returns true or false. The result determines how an Event will be processed by the filter’s inner nodes.

When the configuration is read from the file system, the filter name is automatically inferred from the filename by removing its ‘.json’ extension. The name may contain only letters, numbers and the “_” (underscore) character.
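Putting these properties together, a filter file such as filter_two.json might look as follows. This is a sketch: the equals operator and the ${event.type} accessor follow Tornado’s matching syntax, while the concrete values are invented:

```json
{
  "description": "Forward only email events to the child nodes",
  "active": true,
  "filter": {
    "type": "equals",
    "first": "${event.type}",
    "second": "email"
  }
}
```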

Structure of a Rule Set

A Rule Set is simply a list of rules.
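Each rule in the set lives in its own JSON file. A minimal 0001_rule_one.json might look like the following sketch, based on Tornado’s rule schema (a constraint with WHERE/WITH sections and a list of actions); the values here are purely illustrative:

```json
{
  "description": "Match all email events",
  "continue": true,
  "active": true,
  "constraint": {
    "WHERE": {
      "type": "equals",
      "first": "${event.type}",
      "second": "email"
    },
    "WITH": {}
  },
  "actions": []
}
```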

Interface Overview

The Graphical User Interface allows you to explore the current configuration of your Tornado instance. The GUI has two main views. The first is the Processing Tree View, which lets you examine Tornado’s entire configuration processing tree and modify the configuration; please refer to the next section for more information. The second, the Ruleset View, is reached by clicking on any node displayed in the Processing Tree View; from there you can access the details of each Rule, send Events to Tornado, inspect the outcome of the Event execution, and modify the rules. In this view the information is organized into a table where each row represents a specific Rule.

In the Ruleset View, an Event Test panel is available for sending simulated Events. These Events can be created through a dedicated form and are composed of the following four fields:

  • event type: the type of the Event, such as trap, sms, email, etc.

  • creation time: the Event timestamp defined as an epoch in milliseconds

  • enable execution of actions: whether the actions of matching rules have to be executed or skipped

  • payload: the event payload in JSON format
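A simulated Event built from these fields is sent to Tornado as JSON and might look like the following sketch; created_ms is assumed to be the wire name of the creation time field, and the payload content is invented:

```json
{
  "type": "email",
  "created_ms": 1554130814854,
  "payload": {
    "subject": "Disk space low",
    "sender": "monitoring@example.com"
  }
}
```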

When a test is executed by clicking the “Run Test” button, the linked Event is sent to Tornado and the outcome of the operation will be reported in the rule table.

At this point, a rule can be in one of the following states:

  • matched: If a rule matched the Event

  • stopped: If a rule matched the Event and then stopped the execution flow. This happens if the continue flag of the rule is set to false

  • partially matched: If the where condition of the Rule was matched but it was not possible to process the required extracted variables

  • not matched: If the Rule did not match the Event

For each rule in the table, the extracted variables and the generated Action payloads are shown. In addition, all these extracted variables are also shown in the Event Test form.

Two other buttons are visible: one clears all the fields of the form, and one clears the outcome of the test.

Tornado Processing Tree Editor

The Tornado GUI provides an edit mode that allows you to modify the configuration of the Tornado rules’ processing tree directly from NetEye’s front-end. Rules must be written in JSON, and the editor features a validator that helps you check that a rule is syntactically correct. If you are not acquainted with JSON, you may want to consult a JSON tutorial first.

Two important principles guided the development of the edit mode and must be taken into account when modifying Tornado’s configuration, especially because they differ from other modules in Icinga Director:

  • Implicit Lock Mode. Only one user at a time can modify the processing tree configuration. This prevents multiple users from changing the configuration simultaneously, which might lead to unwanted results and possibly to Tornado not working correctly due to incomplete or wrong configuration. When a user is editing the configuration, the actual, running configuration is left untouched: it continues to be operative and accepts incoming data to be processed.

  • Edit Mode. When you start modifying the configuration, Tornado continues to work with the existing configuration (thanks to the implicit lock mode), while the new changes are saved in a separate draft configuration. The new configuration must then be deployed to become operative.


If a user logs out without deploying the draft, the next user that logs in and starts modifying Tornado’s configuration has two possibilities: review all the changes present in the draft by clicking on the Show Changes button and then continue editing and deploy it, or discard the existing draft completely and start editing a new one. Once a user deploys the changes, the action is recorded in the Auditlog module (System > Auditlog), which displays a detailed diff of what has been changed, together with the user who deployed those changes and the timestamp.

This mode has other positive side effects: you do not need to complete the changes in one session, but can stop and continue at a later point; another user can pick up the draft and complete it; and in case of a disaster (e.g., the abrupt end of the HTTPS connection to the GUI), it is possible to resume the draft from the point where it was left.


Only one draft at a time is allowed; that is, editing of multiple drafts is not supported!

When a user enters the edit mode, a new draft is created on the fly if none is present; it will be an exact copy of the running Tornado configuration. If not already present in the draft, a root node of type Filter is automatically added.

To check the correctness of a draft without impacting the deployed configuration, the test window can also be opened while in Edit Mode. The event will be processed using the draft and the result displayed, while the existing configuration keeps running.

To modify the processing tree, you can add nodes at each level of the tree, as well as inspect and modify each single rule. From the main view it is also possible to add new filters or rulesets to the processing tree by clicking on the buttons on the right-hand side of the GUI.

In more detail, the processing tree is shown in the main area of the GUI. The top filter node (nodes are containers of rules; please check the definitions in the previous section for details) is the root, and therefore the first to be processed. Additional levels below contain further blocks with additional nodes, to allow for multiple processing of the events. Depending on whether Edit mode is active or not, different interactions with the processing tree are possible.

When Edit mode is disabled, the elements composing the processing tree (filters and rulesets) are shown in their hierarchy. Clicking on a filter shows its children, which can be other filters or rulesets, while clicking on a ruleset opens the Rule Editor for editing the rules; refer to the next section for more information.

When Edit mode is enabled, it is possible to add new blocks to the processing tree or move them to another level. Using the buttons on the right-hand side of the editor, new rulesets or filters can be added and placed within the hierarchy. For each node, it is possible to define:

  • a name and a description

  • its place in the hierarchy, by providing the name of the parent node

For a filter, two more options are available:

  • whether it is active or not

  • the filter that should match the event. Syntax for the filter is JSON-based, and examples can be found in the various How-tos present in the tornado section of the User Guide.

Moreover, in Edit mode, three dots appear in each box; when clicked, they open a small menu with two or three icons at the bottom of the box. The options are to edit or delete the ruleset or the filter, with an additional option, for rulesets only, to list rules: a click on that icon opens the rule GUI for editing the single rules of the ruleset.

Tornado Rule Editor

The Tornado Rule Editor allows you to manage the single rules within a ruleset. At the top of the window, all the parents of the current ruleset are shown, letting you quickly check on which leaf of the processing tree the displayed rules are located. As in the processing tree editor, a JSON validator assists you in checking that the syntax of the rules is correct.

In the main area, all the defined rules are shown, together with some information about them: name, action, and status (enabled or not).

As in the Processing Tree Editor, the available options differ depending on whether Edit mode is active or not:

When the Edit mode is not active, it is possible to click on the Open test window button on the top right-hand side of the window to check which events the current rule selection would match.

With Edit mode active, the Open test window button is disabled, but new rules can be added, modified, or deleted; each rule can also be moved along the list with a simple drag and drop.


The Icinga Director Background Daemon is responsible for running Jobs according to your schedule. Director allows you to schedule potentially long-running tasks so that they can run in the background.

Currently this includes:

  • Import runs

  • Sync runs

  • Housekeeping tasks

  • Config rendering and deployment

This component is internally provided as a Hook. This allows other Icinga Web 2 modules to benefit from the Job Runner by providing their very own Job implementations.

Theory of operation

Jobs are configured via the Web frontend. You can create multiple definitions for the very same Job. Every single job will run with a configurable interval. Please do not expect this to behave like a scheduler or a cron daemon. Jobs are currently not executed in parallel. Therefore if one job takes longer, it might have an influence on the scheduling of other jobs.

Some of you might not want actions like automated config deployment to be executed around the clock. That’s why you have the possibility to assign time periods to your jobs: choose an Icinga timeperiod, and the job will only be executed within that period.

Time periods

Icinga time periods can get pretty complex. You configure them with Director, but until now it did not need to “understand” them. This of course changed with time period support in the Job Runner. Director will try to fully “understand” periods in the future, but right now it is only capable of interpreting a limited subset of timeperiod range definitions.


Large parts of the Director’s functionality are also available on the CLI.

Manage Objects

Use icingacli director <type> <action> to show, create, modify or delete Icinga objects of a specific type:

create

Create a new object

delete

Delete a specific object

exists

Check whether a specific object exists

set

Modify an existing object’s properties

show

Show a specific object

Currently the following object types are available on CLI:

  • command

  • endpoint

  • host

  • hostgroup

  • notification

  • service

  • timeperiod

  • user

  • usergroup

  • zone

Create a new object

Use this command to create a new Icinga object:


icingacli director <type> create [<name>] [options]




--<key> <value>

Provide all properties as single command line options


--json

Otherwise provide all options as a JSON string


To create a new host you can provide all of its properties as command line parameters:

icingacli director host create localhost \
    --import generic-host \
    --address \
    --vars.location 'My datacenter'

It would say:

Host 'localhost' has been created

Providing structured data could become tricky that way. Therefore you are also allowed to provide JSON formatted properties:

icingacli director host create localhost \
    --json '{ "address": "", "vars": { "test": [ "one", "two" ] } }'

Delete a specific object

Use this command to delete a single Icinga object. Just run

icingacli director <type> delete <name>

That’s it. To delete the host created before, this would read

icingacli director host delete localhost

It will tell you whether your command succeeded:

Host 'localhost' has been deleted

Check whether a specific object exists

Use this command to find out whether a single Icinga object exists. Just run:

icingacli director <type> exists <name>

So if you run…

icingacli director host exists localhost

…it will either tell you

Host 'localhost' exists

or

Host 'localhost' does not exist

When executed from custom scripts you could also just check the exit code: 0 means that the object exists, 1 that it doesn’t.

Modify an existing object’s properties

Use this command to modify specific properties of an existing Icinga object.


icingacli director <type> set <name> [options]




--<key> <value>

Provide all properties as single command line options

--append-<key> <value>

Append to array values, like imports, groups or vars.system_owners

--remove-<key> [<value>]

Remove a specific property, possibly only when matching a given value. In case the property is an array, it will remove just that value

--json

Otherwise provide all options as a JSON string

--replace

Replace all object properties with the given ones

--auto-create

Create the object in case it does not exist


icingacli director host set localhost \
    --address \
    --vars.location 'Somewhere else'

It will either tell you

Host 'localhost' has been modified

or, when for example issued immediately a second time:

Host 'localhost' has not been modified

Like create, this also allows you to provide JSON-formatted properties:

icingacli director host set localhost --json '{ "address": "" }'

This command will fail in case the specified object does not exist. This is when the --auto-create parameter comes in handy. Command output will tell you whether an object has either been created or (not) modified.

With set you only set the specified properties and do not touch the others. You might instead want to completely override an object, purging all existing unspecified properties. Please use --replace if this is the desired behaviour.

Show a specific object

Use this command to show single objects rendered as Icinga 2 config or in JSON format.


icingacli director <type> show <name> [options]





--resolved

Resolve all inherited properties and show a flat object

--json

Use JSON format

--no-pretty

JSON is pretty-printed per default (for PHP >= 5.4). Use this flag to enforce unformatted JSON

--no-defaults

Per default JSON output skips null or default values. With this flag you will get all properties

Clone an existing object

Use this command to clone a specific object.


icingacli director <type> clone <name> --from <original> [options]




--from <original>

The name of the object you want to clone

--<key> <value>

Override specific properties while cloning


--replace

In case an object already exists, replace it with the clone

--flat

Do not keep inherited properties but create a flat object with all resolved/inherited properties


icingacli director host clone localhost2 --from localhost
icingacli director host clone localhost3 --from localhost --address

Other interesting tasks

Rename objects

There is no rename command, but a simple set can easily accomplish this task:

icingacli director host set localhost --object_name localhost2

Please note that it is usually absolutely no problem to rename objects with the Director. Even renaming something as essential as a template, like the famous generic-host, will not cause any trouble. At least not unless you have other components outside your Director depending on that template.

Disable an object

Objects can be disabled. That way they will still exist in your Director DB, but they will not be part of your next deployment. Toggling the disabled property is all you need:

icingacli director host set localhost --disabled

Valid values for booleans are y, n, 1 and 0. So to re-enable an object you could use:

icingacli director host set localhost --disabled n

Working with booleans

As we learned before, y, n, 1 and 0 are valid values for booleans. But custom variables have no data type, and even if they had one, you might want to change or override it from the CLI. So you usually need to provide booleans in JSON format when you need them in a custom variable.

There is however one exception to this rule: CLI parameters without a given value are handled as boolean flags by the Icinga Web 2 CLI. That explains why the example disabling an object worked without passing y or 1. You can also use this to set a custom variable to boolean true:

icingacli director host set localhost --vars.some_boolean

Want to change it to false? No chance this way, you need to pass JSON:

icingacli director host set localhost --json '{ "vars.some_boolean": false }'

This example shows the dot-notation for setting a specific custom variable. Had we used { "vars": { "some_boolean": false } }, all other custom vars on this object would have been removed.
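The difference between the two notations can be illustrated with a small Python sketch, with plain dictionaries standing in for the host’s custom variables; this is not Director code:

```python
# Sketch of the two update semantics described above (not Director code).
host_vars = {"location": "Berlin", "some_boolean": True}

# Dot-notation: only the named custom variable is touched.
dotted = dict(host_vars)
dotted["some_boolean"] = False   # effect of {"vars.some_boolean": false}

# Nested form: the entire "vars" dictionary is replaced.
nested = {"some_boolean": False}  # effect of {"vars": {"some_boolean": false}}

print(dotted)  # {'location': 'Berlin', 'some_boolean': False}
print(nested)  # {'some_boolean': False} -- "location" is gone
```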

Change object types

The Icinga Director distinguishes between the following object types:

object

The default object type. A host, a command and similar

template

An Icinga template

apply

An apply rule. This allows for assign rules

external_object

An external object. Can be referenced and used, but will not be deployed
Example for creating a host template:

icingacli director host create 'Some template' \
    --object_type template \
    --check_command hostalive

Please take a lot of care when modifying object types; you should not do so without a good reason. The CLI allows you to issue operations that are not allowed in the web frontend. Do not use this unless you really understand its implications. And remember, with great power comes great responsibility.

Import/Export Director Objects

Some objects are not directly related to Icinga objects but are used by the Director to manage them. To make it easier for administrators to, for example, pre-fill an empty Director instance with Import Sources and Sync Rules, related import/export commands come in handy.

Use icingacli director export <type> [options] to export objects of a specific type:




datafields

Export all DataField definitions

datalists

Export all DataList definitions

hosttemplatechoices

Export all IcingaTemplateChoiceHost definitions

importsources

Export all ImportSource definitions

jobs

Export all Job definitions

syncrules

Export all SyncRule definitions

--no-pretty

JSON is pretty-printed per default. Use this flag to enforce unformatted JSON

Use icingacli director import <type> < exported.json to import objects of a specific type:




importsources

Import ImportSource definitions from STDIN

syncrules

Import SyncRule definitions from STDIN

This feature is available since v1.5.0.

Director Configuration Basket

A basket contains a set of Director Configuration objects (like Templates, Commands, Import/Sync definitions, but not single Hosts or Services). This CLI command allows you to integrate them into your very own workflows.

Available Actions




dump

JSON-dump for objects related to the given Basket

list

List configured Baskets

restore

Restore a Basket from a JSON dump provided on STDIN

snapshot

Take a snapshot for the given Basket

dump and snapshot require a specific object name

Use icingacli director basket restore < exported-basket.json to restore objects from a specific basket. Take a snapshot or a backup first to be on the safe side.

This feature is available since v1.6.0.

Health Check Plugin

You can use the Director CLI as an Icinga CheckPlugin and monitor your Director Health. This will run all or just one of the following test suites:




config

Configuration, Schema, Migrations

sync

All configured Sync Rules (pending changes are not a problem)

import

All configured Import Sources (pending changes are not a problem)

jobs

All configured Jobs (ignores disabled ones)

deployment

Deployment Endpoint, last deployment outcome


icingacli director health check [options]




--check <name>

Run only a specific test suite

--db <name>

Use a specific Icinga Web DB resource


icingacli director health check

Example for running a check only for the configuration:

icingacli director health check --check config

Sample output:

Director configuration: 5 tests OK
[OK] Database resource 'Director DB' has been specified
[OK] Make sure the DB schema exists
[OK] There are no pending schema migrations
[OK] Deployment endpoint is ''
[OK] There is a single un-deployed change

Kickstart and schema handling

The kickstart and the migration command are handled in the automation section of the upstream documentation.

Configuration handling

Render your configuration

The Director distinguishes between rendering and deploying your configuration. Rendering means that the Icinga 2 config is pre-rendered and stored to the Director DB. Nothing bad happens if you decide to render the current config thousands of times in a loop: in case a config with the same checksum already exists, nothing new is stored.

You can trigger config rendering by running

icingacli director config render

In case a new config has been created, it will tell you so:

New config with checksum b330febd0820493fb12921ad8f5ea42102a5c871 has been generated

Run it once again, and you’ll see that the output changes:

Config with checksum b330febd0820493fb12921ad8f5ea42102a5c871 already exists

Config deployment

You do not need to explicitly render your config before deploying it to your Icinga 2 master node. Just trigger a deployment; it will re-render the current config:

icingacli director config deploy

The output tells you which config has been shipped:

Config 'b330febd0820493fb12921ad8f5ea42102a5c871' has been deployed

Director tries to avoid needless deployments, so in case you immediately deploy again, the output changes:

Config matches active stage, nothing to do

You can override this by adding the --force parameter. It will then tell you:

Config matches active stage, deploying anyway

In case you do not want the deploy to waste time re-rendering your config, or in case you decide to re-deploy a specific (possibly older) config version, the deploy command allows you to provide a specific checksum:

icingacli director config deploy --checksum b330febd0820493fb12921ad8f5ea42102a5c871

Deployment status

In case you want to fetch information about the deployment status, you can call the following CLI command:

icingacli director config deploymentstatus
     "active_configuration": {
         "stage_name": "5c65cae0-4f1b-47b4-a890-766c82681622",
         "config": "617b9cbad9e141cfc3f4cb636ec684bd60073be1",
         "activity": "4f7bc6600dd50a989f22f82d3513e561ef333363"

In case there is no active stage name related to the Director, active_configuration is set to null.

Another possibility is to pass a list of checksums to fetch the status of specific deployments and (activity log) activities. For example:

icingacli director config deploymentstatus \
 --configs 617b9cbad9e141cfc3f4cb636ec684bd60073be1 \
 --activities 4f7bc6600dd50a989f22f82d3513e561ef333363
     "active_configuration": {
         "stage_name": "5c65cae0-4f1b-47b4-a890-766c82681622",
         "config": "617b9cbad9e141cfc3f4cb636ec684bd60073be1",
         "activity": "4f7bc6600dd50a989f22f82d3513e561ef333363"
     "configs": {
         "617b9cbad9e141cfc3f4cb636ec684bd60073be1": "active"
     "activities": {
         "4f7bc6600dd50a989f22f82d3513e561ef333363": "active"

You can also access a specific value inside the result JSON directly by using the --key param:

icingacli director config deploymentstatus \
 --configs 617b9cbad9e141cfc3f4cb636ec684bd60073be1 \
 --activities 4f7bc6600dd50a989f22f82d3513e561ef333363 \
 --key active_configuration.config
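The effect of --key can be sketched in Python; this illustrates the dotted-path lookup, not the actual implementation:

```python
# Sketch of what --key does: walk a dotted path through the result JSON.
def get_by_key(result: dict, dotted_key: str):
    value = result
    for part in dotted_key.split("."):
        value = value[part]
    return value

status = {
    "active_configuration": {
        "stage_name": "5c65cae0-4f1b-47b4-a890-766c82681622",
        "config": "617b9cbad9e141cfc3f4cb636ec684bd60073be1",
    }
}
print(get_by_key(status, "active_configuration.config"))
# 617b9cbad9e141cfc3f4cb636ec684bd60073be1
```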

Cronjob usage

You could decide to pre-render your config in the background quite often. As of this writing, this has one nice advantage: it allows the GUI to find out whether a bunch of changes still results in the very same config.

Run sync and import jobs

Import Sources
  • List available Import Sources shows a table with your defined Import Sources, their IDs and current state. As triggering Imports requires an ID, this is where you can look up the desired ID:

    icingacli director importsource list

  • Check a given Import Source for changes fetches data from the given Import Source and compares it to the most recently imported data:

    icingacli director importsource check --id <id>




    --id <id>

    An Import Source ID. Use the list command to figure it out

    --benchmark

    Show timing and memory usage details

  • Fetch data from a given Import Source fetches data from the given Import Source and outputs them as plain JSON:

    icingacli director importsource fetch --id <id>




    --id <id>

    An Import Source ID. Use the list command to figure it out

    --benchmark

    Show timing and memory usage details

  • Trigger an Import Run for a given Import Source fetches data from the given Import Source and stores it to the Director DB, so that the next related Sync Rule run can work with fresh data. In case data didn’t change, nothing is going to be stored:

    icingacli director importsource run --id <id>




    --id <id>

    An Import Source ID. Use the list command to figure it out

    --benchmark

    Show timing and memory usage details

Sync Rules
  • List defined Sync Rules shows a table with your defined Sync Rules, their IDs and current state. As triggering a Sync requires an ID, this is where you can look up the desired ID:

    icingacli director syncrule list

  • Check a given Sync Rule for changes runs a complete Sync in memory but does not persist any changes:

    icingacli director syncrule check --id <id>




    --id <id>

    A Sync Rule ID. Use the list command to figure it out

    --benchmark

    Show timing and memory usage details

  • Trigger a Sync Run for a given Sync Rule builds new objects according to your Sync Rule, compares them with existing ones, and persists any changes:

    icingacli director syncrule run --id <id>




    --id <id>

    A Sync Rule ID. Use the list command to figure it out

    --benchmark

    Show timing and memory usage details

Database housekeeping

Your database may grow over time and ask for various housekeeping tasks. You can usually store a lot of data in your Director DB before you would even notice a performance impact.

Still, we started to prepare some tasks that assist with removing useless garbage from your DB. You can show available tasks with:

icingacli director housekeeping tasks

The output might look as follows:

 Housekeeping task (name)                                  | Count
 Undeployed configurations (oldUndeployedConfigs)          |     3
 Unused rendered files (unusedFiles)                       |     0
 Unlinked imported row sets (unlinkedImportedRowSets)      |     0
 Unlinked imported rows (unlinkedImportedRows)             |     0
 Unlinked imported properties (unlinkedImportedProperties) |     0

You could run a specific task with

icingacli director housekeeping run <taskName>

…like in:

icingacli director housekeeping run unlinkedImportedRows

Or you could run all of them; that’s the preferred way of doing this:

icingacli director housekeeping run ALL

Please note that some tasks once issued create work for other tasks, as lost imported rows might appear once you remove lost row sets. So ALL is usually the best choice as it runs all of them in the best order.

The Icinga Director REST API

Icinga Director has been designed with a REST API in mind. Most URLs you can access with your browser will also act as valid REST URL endpoints.

Base Headers

All your requests MUST have a valid accept header. The only acceptable variant right now is application/json, so please always append a header as follows to your requests:

Accept: application/json

Please use HTTP authentication and any valid Icinga Web 2 user, granted enough permissions to accomplish the desired actions. The restrictions and permissions that have been assigned to web users will also be enforced for API users. In addition, the permission director/api is required for any API access.
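For example, a request carrying both headers could be built in Python as follows; this is a sketch, and the host name, path prefix and credentials are placeholders for your environment:

```python
import base64
import urllib.request

# Sketch: building an authenticated Director API request.
# Host name, path prefix and credentials are placeholders.
credentials = base64.b64encode(b"username:password").decode()
req = urllib.request.Request(
    "https://icingaweb.example.com/icingaweb2/director/host?name=localhost",
    headers={
        "Accept": "application/json",          # mandatory header
        "Authorization": "Basic " + credentials,
    },
)
# urllib.request.urlopen(req) would then perform the GET request.
print(req.get_header("Accept"))  # application/json
```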


There are no version strings so far in the Director URLs. We will try hard to not break compatibility with future versions. Sure, sooner or later we also might be forced to introduce some kind of versioning. But who knows?

As a developer you can trust us not to remove any existing REST URL or any provided property. However, you must always be ready to accept new properties.

URL scheme and supported methods

We support GET, POST, PUT and DELETE.




GET

Read / fetch data. Not allowed to run operations with the potential to cause any harm

POST

Trigger actions, create or modify objects. Can also be used to partially modify objects

PUT

Creates or replaces objects, cannot be used to modify single object properties

DELETE

Remove a specific object

POST director/host gives 201 on success

GET director/host?

PUT director/host? gives 200 ok on success and 304 not modified on no change

DELETE director/host? gives 200 on success

First example request with CURL

curl -H 'Accept: application/json' \
     -u 'username:password' \

CURL helper script

A script like the following makes it easy to play around with curl. Here METHOD, URL and BODY are taken from the command line, and USERNAME is assumed to be set in your environment; adjust the base URL to your installation:

METHOD=$1
URL=$2
BODY="$3"

test -z "$BODY" && curl -u "$USERNAME" \
  -i https://icingaweb/icingaweb/$URL \
  -H 'Accept: application/json' \
  -X "$METHOD"

test -z "$BODY" || curl -u "$USERNAME" \
  -i https://icingaweb/icingaweb/$URL \
  -H 'Accept: application/json' \
  -X "$METHOD" \
  -d "$BODY"


It can be used as follows:

director-curl GET director/host?name=localhost

director-curl POST director/host '{"object_name": "host2", "... }'

Should I use HTTPS?

Sure, absolutely, no doubt. There is absolutely no reason NOT to use HTTPS these days. Especially not for a configuration tool that allows you to configure check commands that are going to be executed on all your servers.

Special parameters for Icinga Objects
  • Resolve object properties. In case you add the resolve parameter to your URL, all inherited object properties will be resolved. Such a URL could look as follows:

  • Retrieve all properties. Per default properties with null value are skipped when shipping a result. You can influence this behavior with the properties parameter. Just append properties=ALL to your URL:

  • Retrieve only specific properties. The properties parameter also allows you to specify a list of specific properties. In that case, only the given properties will be returned, even when they have no (null) value:



GET director/host?

  "address": "",
  "check_command": null,
  "check_interval": null,
  "display_name": "pe2015 (",
  "enable_active_checks": null,
  "flapping_threshold": null,
  "groups": [ ],
  "imports": [
  "retry_interval": null,
  "vars": {
    "facts": {
      "aio_agent_build": "1.2.5",
      "aio_agent_version": "1.2.5",
      "architecture": "amd64",
      "augeas": {
        "version": "1.4.0"


    "address": "",
    "check_command": "tom_ping",
    "check_interval": "60",
    "display_name": "pe2015 (",
    "enable_active_checks": true,
    "groups": [ ],
    "imports": [
    "retry_interval": "10",
    "vars": {
      "facts": {
        "aio_agent_build": "1.2.5",
        "aio_agent_version": "1.2.5",
        "architecture": "amd64",
        "augeas": {
          "version": "1.4.0"

JSON is pretty-printed by default, at least for PHP >= 5.4.

Error handling

Director tries hard to return meaningful output and error codes:

HTTP/1.1 400 Bad Request
Server: Apache
Content-Length: 46
Connection: close
Content-Type: application/json

{
    "error": "Invalid JSON: Syntax error"
}
Trigger actions

You can of course also use the API to trigger specific actions. Deploying the configuration is as simple as issuing:

POST director/config/deploy


Currently we do not handle Last-Modified and ETag headers. This would involve some work, but could be a cool feature. Let us know your ideas!

Sample scenario

Let’s show you how the REST API works with a couple of practical examples:

Create a new host

POST director/host

{
  "object_name": "apitest",
  "object_type": "object",
  "address": "",
  "vars": {
    "location": "Berlin"
  }
}

HTTP/1.1 201 Created
Date: Tue, 01 Mar 2016 04:43:55 GMT
Server: Apache
Content-Length: 140
Content-Type: application/json

{
    "address": "",
    "object_name": "apitest",
    "object_type": "object",
    "vars": {
        "location": "Berlin"
    }
}
The most important part of the response is the response code: 201, a resource has been created. Just for fun, let’s fire the same request again. The answer obviously changes:

HTTP/1.1 500 Internal Server Error
Date: Tue, 01 Mar 2016 04:45:04 GMT
Server: Apache
Content-Length: 60
Connection: close
Content-Type: application/json

{
    "error": "Trying to recreate icinga_host (apitest)"
}

So, let’s update this host. To work with existing objects, you must ship their name in the URL:

POST director/host?name=apitest

{
  "object_name": "apitest",
  "object_type": "object",
  "address": "",
  "vars": {
    "location": "Berlin"
  }
}

Same body, so no change:

HTTP/1.1 304 Not Modified
Date: Tue, 01 Mar 2016 04:45:33 GMT
Server: Apache

So let’s now try to really change something:

POST director/host?name=apitest
{"address": "", "vars.event": "Icinga CAMP" }

We get status 200, changes have been applied:

HTTP/1.1 200 OK
Date: Tue, 01 Mar 2016 04:46:25 GMT
Server: Apache
Content-Length: 172
Content-Type: application/json

{
    "address": "",
    "object_name": "apitest",
    "object_type": "object",
    "vars": {
        "location": "Berlin",
        "event": "Icinga CAMP"
    }
}

The response always returns the full object on modification, so you can immediately inspect the merged result. As you can see, POST requests only touch the parameters you passed; the rest remains untouched.

One more example to prove this:

POST director/host?name=apitest
{"address": "", "vars.event": "Icinga CAMP" }

No modification, you get a 304. HTTP standards strongly discourage shipping a body in this case:

HTTP/1.1 304 Not Modified
Date: Tue, 01 Mar 2016 04:52:05 GMT
Server: Apache
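The merge behaviour just demonstrated, including the dotted-key notation "vars.event", can be sketched in a few lines (a simplified model of the semantics described above, not Director's actual implementation):

```python
def merge_post(current, body):
    """Simulate Director's POST merge semantics: only the passed
    parameters are touched, and a dotted key such as "vars.event"
    updates a single entry inside a dictionary property."""
    result = {k: (v.copy() if isinstance(v, dict) else v)
              for k, v in current.items()}
    for key, value in body.items():
        if "." in key:
            outer, inner = key.split(".", 1)
            result.setdefault(outer, {})[inner] = value
        else:
            result[key] = value
    return result

host = {"object_name": "apitest", "vars": {"location": "Berlin"}}
updated = merge_post(host, {"address": "", "vars.event": "Icinga CAMP"})
print(updated["vars"])
# {'location': 'Berlin', 'event': 'Icinga CAMP'}
```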

As you might have noted, we only changed single properties in the vars dictionary. Now let’s override the whole dictionary:

POST director/host?name=apitest
{"address": "", "vars": { "event": [ "Icinga", "Camp" ] } }

The response shows that this works as expected:

HTTP/1.1 200 OK
Date: Tue, 01 Mar 2016 04:52:33 GMT
Server: Apache
Content-Length: 181
Content-Type: application/json

{
    "address": "",
    "object_name": "apitest",
    "object_type": "object",
    "vars": {
        "event": [
            "Icinga",
            "Camp"
        ]
    }
}

If merging properties is not what you want, PUT comes to the rescue:

PUT director/host?name=apitest

{ "vars": { "event": [ "Icinga", "Camp" ] } }

All other properties vanish; only name and type remain:

HTTP/1.1 200 OK
Date: Tue, 01 Mar 2016 04:54:33 GMT
Server: Apache
Content-Length: 153
Content-Type: application/json

{
    "object_name": "apitest",
    "object_type": "object",
    "vars": {
        "event": [
            "Icinga",
            "Camp"
        ]
    }
}

Let’s put “nothing”:

PUT director/host?name=apitest

Works as expected:

HTTP/1.1 200 OK
Date: Tue, 01 Mar 2016 04:57:35 GMT
Server: Apache
Content-Length: 62
Content-Type: application/json

{
    "object_name": "apitest",
    "object_type": "object"
}

Of course, PUT also supports 304, you can check this by sending the same request again.
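The contrast with POST can be modelled the same way: a sketch of PUT's replace semantics, where everything but the identifying name and type is dropped before the body is applied, and an unchanged result maps to a 304:

```python
def put_replace(current, body):
    """Simulate Director's PUT semantics: the body replaces every
    property except the identifying object_name and object_type."""
    kept = {k: current[k] for k in ("object_name", "object_type")
            if k in current}
    new = dict(kept)
    new.update(body)
    changed = new != current   # False would map to "304 Not Modified"
    return new, changed

host = {"object_name": "apitest", "object_type": "object",
        "address": "", "vars": {"location": "Berlin"}}
new, changed = put_replace(host, {"vars": {"event": ["Icinga", "Camp"]}})
print(sorted(new))   # ['object_name', 'object_type', 'vars']
print(changed)       # True
```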

Now let’s try to cheat:

KILL director/host?name=apitest

HTTP/1.1 400 Bad Request
Date: Tue, 01 Mar 2016 04:54:07 GMT
Server: Apache
Content-Length: 43
Connection: close
Content-Type: application/json

{
    "error": "Unsupported method KILL"
}

Ok, no way. So let’s use the correct method:

DELETE director/host?name=apitest
HTTP/1.1 200 OK
Date: Tue, 01 Mar 2016 05:59:22 GMT
Server: Apache
Content-Length: 109
Content-Type: application/json
    "imports": [
    "object_name": "apitest",
    "object_type": "object"

Service Apply Rules

Please note that Service Apply Rule names are not unique in Icinga 2. They are not real objects; they create other objects in a loop, which makes it impossible to distinguish them by name. Therefore, a dedicated REST API endpoint director/serviceapplyrules ships all Service Apply Rules combined with their internal ID. This ID can then be used to modify or delete a rule via director/service.

Deployment Status

In case you want to fetch information about the deployment status, you can call the following API:

GET director/config/deployment-status

HTTP/1.1 200 OK
Date: Wed, 07 Oct 2020 13:14:33 GMT
Server: Apache
Content-Type: application/json

{
    "active_configuration": {
        "stage_name": "b191211d-05cb-4679-842b-c45170b96421",
        "config": "617b9cbad9e141cfc3f4cb636ec684bd60073be1",
        "activity": "028b3a19ca7457f5fc9dbb5e4ea527eaf61616a2"
    }
}

This call returns a 500 if Icinga is not reachable. If there is no active stage name related to the Director, active_configuration is set to null.

Alternatively, you can pass a list of checksums to fetch the status of specific deployments and activity log entries. For example:

GET director/config/deployment-status?config_checksums=617b9cbad9e141cfc3f4cb636ec684bd60073be2,

{
    "active_configuration": {
        "stage_name": "b191211d-05cb-4679-842b-c45170b96421",
        "config": "617b9cbad9e141cfc3f4cb636ec684bd60073be1",
        "activity": "028b3a19ca7457f5fc9dbb5e4ea527eaf61616a2"
    },
    "configs": {
        "617b9cbad9e141cfc3f4cb636ec684bd60073be2": "deployed",
        "617b9cbad9e141cfc3f4cb636ec684bd60073be1": "active"
    },
    "activities": {
        "617b9cbad9e141cfc3f4cb636ec684bd60073be1": "undeployed",
        "028b3a19ca7457f5fc9dbb5e4ea527eaf61616a2": "active"
    }
}

The possible status values are:

  • active: this configuration is currently active

  • deployed: this configuration has been deployed at some point

  • failed: the deployment of this configuration has failed

  • undeployed: this configuration has been rendered, but not yet deployed

  • unknown: no configuration has been found for this checksum
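The `configs` map shown above keys each checksum to one of these status values, so client-side interpretation reduces to a dictionary lookup with `unknown` as the fallback. A minimal sketch, fed with the sample response from this section:

```python
import json

RESPONSE = json.loads("""
{
    "configs": {
        "617b9cbad9e141cfc3f4cb636ec684bd60073be2": "deployed",
        "617b9cbad9e141cfc3f4cb636ec684bd60073be1": "active"
    }
}
""")

def config_status(response, checksum):
    """Map a config checksum to one of the status values listed above."""
    return response.get("configs", {}).get(checksum, "unknown")

print(config_status(RESPONSE, "617b9cbad9e141cfc3f4cb636ec684bd60073be1"))
# active
print(config_status(RESPONSE, "deadbeef"))
# unknown
```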

Agent Tickets

The Director is very helpful when it comes to managing your Icinga Agents. In case you want to fetch tickets through the API, proceed as follows:

GET director/host/ticket?name=apitest
HTTP/1.1 200 OK
Date: Thu, 07 Apr 2016 22:19:24 GMT
Server: Apache
Content-Length: 43
Content-Type: application/json

Please expect an error in case the host does not exist or has not been configured to be an Icinga Agent.

Self Service API

  • Theory of operation

    Icinga Director offers a Self Service API, allowing new Icinga nodes to register themselves. No credentials are required; authentication is based on API keys. There are two types of such keys:

    • Host Template API keys

    • Host Object API keys

    Template keys basically grant the permission to:

    • Create a new host based on that template

    • Specify name and address properties for that host

    This is a one-time operation and allows one to claim ownership of a specific host. Now, there are two possible scenarios:

    • The host already exists

    • The host is not known to Icinga Director

    In case the host already exists, Director will check whether its API key matches the given one. [..]

  • Request processing for Host registration

    A new node will POST to self-service/register-host, with two parameters in the URL:

    • name: its desired object name, usually the FQDN

    • key: a valid Host Template API key

    In its body it is allowed to specify a specific set of properties. At the time of this writing, these are:

    • display_name

    • address

    • address6

    Director will validate the key and load the corresponding Host Template. In case no such template is found, the request is rejected. Then it checks whether a Host with the given name exists. In case it does, the request is rejected unless:

    • It inherits the loaded Host Template

    • It already has an API key

    If these conditions match, the request is processed. The following sketch roughly shows the decision tree (AFTER the key has been validated):
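Since the sketch itself is not reproduced here, the decision tree can be roughly expressed in code. This is a simplified model of the conditions listed above (after key validation); the dictionary keys `imports` and `api_key` are illustrative, not Director's internal representation:

```python
def decide_registration(existing_host, template_name):
    """Rough sketch of the registration decision tree, AFTER the Host
    Template API key has been validated.

    existing_host is None when the host is not yet known to Icinga
    Director; otherwise it is a dict describing the host.
    """
    if existing_host is None:
        # Unknown host: create it based on the loaded Host Template.
        return "create host from template"
    # Existing hosts are rejected unless both conditions above hold.
    inherits = template_name in existing_host.get("imports", [])
    has_key = bool(existing_host.get("api_key"))
    return "process request" if inherits and has_key else "reject"

print(decide_registration(None, "windows-agent"))
# create host from template
print(decide_registration({"imports": ["other-template"]}, "windows-agent"))
# reject
```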


Self Service API

Icinga Director offers a Self Service API, allowing new Hosts running the Icinga Agent to register themselves in a secure way.

Windows Agents

Windows Agents are the main target audience for this feature. It allows you to generate a single Powershell script based on the Icinga 2 Powershell Module. You can either use the same script for all of your Windows hosts or generate different ones for different kinds of systems.

This installation script could then be shipped with your base images, invoked remotely via PowerShell Remoting, distributed as a module via Group Policies and/or triggered via Run-Once (AD Policies).

Linux Agents

At the time of this writing, we do not ship a script with all the functionality you can find in the Windows Powershell script. Linux and Unix environments are mostly highly automated these days, and such a magic shell script is often not what people want.

Still, you can also benefit from this feature by directly using our Self Service REST API. It should be easy to integrate it into the automation tool of your choice.
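As a starting point for such an integration, the registration call described in the previous section can be prepared like this. The base URL and key are placeholders, and only the three properties listed above are shipped in the body:

```python
import json
import urllib.parse
import urllib.request

def register_host_request(base_url, name, key, properties=None):
    """Prepare (without sending) a self-service registration call:
    POST self-service/register-host with name and key in the URL and
    the allowed properties (display_name, address, address6) in the
    JSON body."""
    query = urllib.parse.urlencode({"name": name, "key": key})
    allowed = {"display_name", "address", "address6"}
    body = {k: v for k, v in (properties or {}).items() if k in allowed}
    return urllib.request.Request(
        f"{base_url}/self-service/register-host?{query}",
        data=json.dumps(body).encode(),
        headers={"Accept": "application/json",
                 "Content-Type": "application/json"},
        method="POST")

# Placeholder base URL and Host Template API key.
req = register_host_request("https://icingaweb/icingaweb",
                            "agent1.example.com", "TEMPLATE-KEY",
                            {"address": "192.0.2.10"})
print(req.get_method())
# POST
```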

Base Configuration

You have full control over the automation script generated by the Icinga Director. Please go to the Infrastructure Dashboard and choose the Self Service API:

Infrastructure Dashboard - Self Service API

Fig. 105 Infrastructure Dashboard - Self Service API

This leads to the Self Service API Settings form. Most settings are self-explanatory and come with detailed inline hints. The most important choice is whether the script should automatically install the Icinga Agent:

Settings - Choose installation source

Fig. 106 Settings - Choose installation source

In case you opted for automated installation, more options will pop up:

Settings - Installer Details

Fig. 107 Settings - Installer Details

The Icinga Director “Live-Creation” experimental feature permits creating Icinga objects in the Director and simultaneously directly in Icinga 2, without the need to deploy the Director configuration.

The Live-Creation is available both from icingacli and from the Director REST API.

Below you see a flowchart which gives you an idea of what happens when the Object Live-Creation is called from the Director REST API.

Import source

Fig. 108 Import source

CLI Commands

Fetch all available Virtual Machines

This command is mostly for test/debug reasons and gives you an output of all Virtual Machines with a default set of properties:

icingacli vsphere fetch virtualmachines [options]

The available options are:



--vhost <host>

IP, host or URL for your vCenter or ESX host

--username <user>

When authenticating, this username will be used

--password <pass>

The related password


Replace id-references with their name


Accept certificates signed by an unknown CA


Accept certificates not matching the host name


Use plaintext HTTP requests

--proxy <proxy>

Use the given proxy (IP, host or host:port)

--proxy-type <type>

HTTP (default) or SOCKS5

--proxy-username <user>

Username for authenticated HTTP proxy

--proxy-password <pass>

Password for authenticated HTTP proxy


Show resource usage summary


Dump JSON output

Fetch all available Host Systems

This command is mostly for test/debug reasons and gives you an output of all Host Systems with a default set of properties:

icingacli vsphere fetch hostsystems [options]

The available options are:



--vhost <host>

IP, host or URL for your vCenter or ESX host

--username <user>

When authenticating, this username will be used

--password <pass>

The related password


Accept certificates signed by an unknown CA


Accept certificates not matching the host name


Use plaintext HTTP requests

--proxy <proxy>

Use the given proxy (IP, host or host:port)

--proxy-type <type>

HTTP (default) or SOCKS5

--proxy-username <user>

Username for authenticated HTTP proxy

--proxy-password <pass>

Password for authenticated HTTP proxy


Show resource usage summary


Dump JSON output

Shutdown Management Rest API

Shutdown can be executed on a host using a REST API. Currently, the following calls are available.

Trigger Shutdown Definition

Endpoint: trigger-shutdown-definition

This endpoint enables you to trigger an asynchronous run of a shutdown definition via a REST API call. The call is non-blocking and the shutdown will be performed in the background.

Parameters:

  • id: The ID of the shutdown definition


curl -u root:xxx -H 'Accept: application/json' https://localhost/neteye/shutdownmanager/api/trigger-shutdown-definition?id=1
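The same call can be composed programmatically. A minimal sketch building the endpoint URL shown above (the host name and definition ID are placeholders; sending the request and supplying credentials is left to your HTTP client):

```python
from urllib.parse import urlencode

def trigger_shutdown_url(neteye_host, definition_id):
    """Compose the trigger-shutdown-definition URL shown above.
    The call is non-blocking: the response returns before the
    shutdown definition has finished running."""
    base = (f"https://{neteye_host}"
            "/neteye/shutdownmanager/api/trigger-shutdown-definition")
    return base + "?" + urlencode({"id": definition_id})

print(trigger_shutdown_url("localhost", 1))
# https://localhost/neteye/shutdownmanager/api/trigger-shutdown-definition?id=1
```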

Configuring Icinga Monitoring Retention Policy

This section describes how to configure and apply the retention time for the Icinga Data Output (IDO) DB.

The retention time is set to 550 days by default, meaning that data stored in the IDO DB are kept for 550 days and deleted afterwards. This affects the monitoring data history, which is used to populate SLM reports. The value can be changed at any later point to suit your needs.


In the 4.13 release a new setting was introduced, called retention policy. The reason for its introduction is that all data (warnings, logs, output of checks, and so on) are stored in the database, which can grow dramatically over time. This setting is enabled by default in new installations only, while existing installations must configure and enable it by following the steps described next. The default value given to this setting is 550 days, but it can be changed at a later point.

How To Set The Retention Time

To configure or modify the value, go to Configuration > Modules > Neteye > Configuration.

  • Step 1. Click on the slider called Enable default retention policy to enable the feature. The slider will disappear afterwards: once the feature is enabled, there is no rolling back, and it will remain enabled forever.

  • Step 2. Insert a value in days for the Default retention policy age, which by default is 550, i.e., about 18 months.

  • Step 3. Click on the Save Changes button.

  • Step 4. Go to the command line and run the neteye_secure_install script.


Only after Step 4 will the setting be applied and enabled, so make sure to complete all the steps successfully!

How To Disable the Retention Time

The retention time can be disabled, meaning that no data will ever be deleted. To do so, set the value of retention time to 0 (zero).


Disabling the retention time is discouraged, because the disk space required by the Databases might grow quickly if the monitoring activities on NetEye create a lot of input.

Deprecated modules



This module is deprecated, please consider switching to the new Tornado module, which is actively developed.

The NetEye strategy for interpreting externally generated events received by the NetEye server is to allow user-defined matching rules and handlers to accurately use event sources and properties to determine how urgent an event is and then take the appropriate action.

Events can be messages generated by various computer systems and devices. These need not be Windows or Linux servers; they can also be firewalls, security devices, network devices or other kinds of embedded systems that are able to send notifications and alerts using one of the following communication channels:

  • SNMP Trap

  • Log message (Syslog protocol)

  • Email message

  • SMS

  • Remote Procedure Call, API call or script (sent via the EventConsole command)

Notifications are sent when a device or system encounters a pre-defined error or warning condition:

EventHandler module

Fig. 109 The EventHandler module at a glance.

These incoming events can be accepted, filtered and classified by the EventHandler module. The EventHandler rules allow NetEye to classify the incoming event and decide whether to forward it to the EventConsole, raise an alert within Monitoring, or notify a person directly via Email or SMS.

Event messages are managed in the EventConsole of the EventHandler:

  • The EventConsole serves as a container for incoming Messages

  • It can receive Messages from:

    • The NetEye EventHandler (SNMP traps, Log messages, Email and SMS notifications)

    • Remote clients (e.g. Windows, Unix and Linux) via nesendmsg

  • It propagates warnings and critical messages (under the host name EventConsole)

    • Messages can be acknowledged to avoid further notifications

    • Messages can be closed

  • It enables the Monitoring notification and reporting logic

  • It can be used for:

    • Collecting batch script output

    • Finding event-triggered system notifications

    • Checking log files

  • The list is keyed on Host – Subject pairs

NetEye’s EventHandler allows you to configure automatic reactions to events coming from SNMP traps, emails, SMS messages and logs. The EventHandler Dashboard (Fig. 110) and GUI provide a user-friendly way to configure these different actions.

The EventHandler Dashboard

Fig. 110 The EventHandler Dashboard

Within this module you can define specific rules for any type of event. Once an event takes place, the rule-matching engine searches for predefined rules and takes the corresponding action. These actions can range from sending an email or SMS, to displaying the event within the EventConsole, to even ignoring it completely.

To access the EventHandler, simply click on “Event Handler” on the navigation bar at the left, then select the type of event source you want to configure rules for. Below the list of event type configuration options is the Event Lifetracker, which shows the quantity and type of events seen by the system in the last 30 seconds. When you use NetEye 4 for the first time, the Lifetracker will be empty because there are no rules yet defined that match with incoming events.

You can configure the EventHandler and its individual modules using the appropriate panel as shown in Fig. 111 by following this sequence from the left-side navigation bar: Configuration> Modules > eventhandler > Configuration tab.

EventHandler Configuration

Fig. 111 EventHandler Configuration

Definition and Management of Rules

Clicking on one of the four event type configuration icons will open a new panel to the right with the appropriate tab already visible. For instance, Fig. 112 shows the Trap Handler rule list panel after clicking on the Trap icon in the EventHandler Dashboard. The panels for all four event types have the same interface for viewing the rules and changing their priorities, but different panels for the actual details of the rules.

Each rule in the list view is shown with its description, an action type, a regular expression that causes the rule to match an event, and Options for changing the rule’s priority relative to other rules. The rules are ordered from highest priority (top line of the first page of rules) to lowest priority (bottom line in the last page of rules).

Rules for the Trap Handler

Fig. 112 Rules for the Trap Handler

Adding, Modifying and Copying Rules

From the EventHandler Dashboard, select the tab of the desired event type (Trap, Email, SMS or Log) and click the “+ Add” button. The “Add rule” panel will appear on the right.

Adding a new rule to the Trap Handler in the Details pane

Fig. 113 Adding a new rule to the Trap Handler in the Details pane

Here you can set the attributes of the new rule, such as the event match expression in regular expression syntax.

At the bottom of the Rule subpanel you can select advanced properties. For instance, for trap rules you can define the Host IP, OID, etc., while for email rules you can select From, To, etc. For each property you select, a new field will appear below, allowing you to set values to match those fields.

If an incoming event matches the regular expression and other parameters you set in your new rule, and subject to higher priority rules having already matched it, then the action corresponding to that rule as defined in the Action subpanel below it will be carried out.

Depending on the type of action chosen in the Action panel, different attributes are available (for instance, for monitoring you can define the Host, Service, Status and Message). Some of these options use the drop down with custom field interface control to allow you to either select a pre-defined item/value, or enter a custom value that can include variables.

If you change one or more rule attributes and click on the green “Submit” button (Fig. 113), the new rule will be activated the next time that NetEye 4 restarts. To force all modified rules to be activated immediately, click on the “Commit Settings” menu item found in the “v” menu to the right of the panel title bar.

Rule Matching

For more complicated events where matching a regular expression in the input title is not sufficient, the EventHandler allows access to additional fields depending on the event type.

Constructing Trap Rule Matching Expressions with Built-In Variables




  • The description related to the OID (object identifier)

  • The object identifier (OID)

  • Trap description field of snmptranslate

  • System description from the trap itself

  • The Unix timestamp when the trap was received

  • The host name that was extracted

  • Trap translate text of snmptranslate

  • All the trap content, in a single line

  • All the trap content, lines separated by carriage returns (CR)

  • All the trap content, single line, lines numbered

  • All the trap content, multi-line (CR) and numbered

You can use several placeholders within one of the allowed input fields, and it is also possible to combine them with the following patterns:

Pattern definition

A trap handler pattern consists of an opening tag (\@) followed by a text constant that will always come before the matching string, an ‘@’ separator, a regular expression (e.g. “.*”, without the quotes), another @ separator, the text constant that comes afterwards, and finally the closing tag (i.e. \@).

Thus the syntax should look like this:

\@before@regex@after\@

where regex can be any Perl regular expression, and before and after should normally be text constants.

Note: These patterns are not allowed in the regular expression input field.
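To make the pattern mechanics concrete, here is a small sketch that applies a pattern of the form described above (opening tag \@, leading text constant, @, regular expression, @, trailing text constant, closing tag \@). The concrete syntax line was elided from this section, so the exact encoding assumed here should be checked against your NetEye version:

```python
import re

def match_pattern(pattern, line):
    """Apply a trap-handler style pattern \\@before@regex@after\\@
    to a line; return the text matched by the inner regular
    expression, or None when the line does not match."""
    m = re.fullmatch(r"\\@(.*?)@(.*?)@(.*?)\\@", pattern, re.DOTALL)
    if not m:
        raise ValueError("not a valid pattern")
    before, regex, after = m.groups()
    # The surrounding text constants are matched literally.
    hit = re.search(re.escape(before) + "(" + regex + ")" + re.escape(after),
                    line)
    return hit.group(1) if hit else None

print(match_pattern(r"\@Interface @.*@ is down\@", "Interface eth0 is down"))
# eth0
```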


By checking the continue option in the trap definition, the trap handler will not stop after the match but will instead continue and try to match additional trap handling rules (if any) within the table that have lower priority. In any event, the actions that the trap handler takes will be the same.

The NetEye Trap Handler also supports more specific and flexible rules in comparison to general regular expressions. By using the advanced rules of the Trap Handler frontend, the user can define a line-by-line matching of trap files. At the same time it is possible to set trap-specific user variables and placeholders. These variables can be used like the predefined variables in the Subject, Message, Host and Service fields.

For the advanced rule definition you can skip the “@” before and after the variable name, but within the above-mentioned fields it is required.

The advanced rules must all match (think of there being an AND between them), otherwise the rule will not match and the trap handler will proceed to the next definition. Of course, the (general) regular expression must also match, but you can for instance use “.*” to force an advanced rule check only.

It is also possible to specify several advanced rules for the same trap line.

Substring Access

To access a substring of an extracted string, indicate the desired position as follows:

@VARNAME_1@

where VARNAME is the name of the variable (second input field) and 1 is the position in the string.

A suitable regular expression for this usage would be, for example:

Problem.found for Host.

In this example @myVar_0@ contains the full matched line, @myVar_1@ contains the substring that contains the problem description, and @myVar_2@ could contain some host information.

Tip: To avoid unexpected behavior you should not “override” the predefined variables (e.g. @TRAPHOST@) even though the system will allow it. It is also possible to specify “.*” for any line-matching rule, although in this case you should set a variable such that the data will be extracted. You can also omit the variable name, which implies that this rule serves only as a filter.
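The relationship between a line-matching rule and its extracted variables can be sketched as follows: @NAME_0@ holds the full matched line and @NAME_n@ the n-th extracted substring, as in the @myVar_0@/@myVar_1@/@myVar_2@ example above. The regex and sample line below are hypothetical illustrations:

```python
import re

def extract_variables(name, regex, line):
    """Map a line-matching rule to extracted variables: @NAME_0@ is
    the full matched line, @NAME_n@ the n-th capture group."""
    m = re.search(regex, line)
    if not m:
        return {}
    variables = {f"@{name}_0@": m.group(0)}
    for i, value in enumerate(m.groups(), start=1):
        variables[f"@{name}_{i}@"] = value
    return variables

vars_ = extract_variables("myVar", r"(Problem \w+ found) for Host (\S+)",
                          "Problem LinkDown found for Host sw01")
print(vars_["@myVar_1@"])   # Problem LinkDown found
print(vars_["@myVar_2@"])   # sw01
```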


As described above, the Action subpanel allows you to select the type of action to perform if the rule above it is successfully matched. The fields that are visible depend on the type of action selected. For instance, if the “Ignore” action is selected then no fields will be shown, while if “Email” is selected as in Fig. 114, you will see the appropriate fields such as “To” and “From”.

The action panel for an EventHandler rule

Fig. 114 The action panel for an EventHandler rule

You can use “extracted variables” inside each parameter definition.

Extracted Variables

If you have used the test panel to find rules that match an event, you can use the “Extracted Variables” pane to see the values that variables had during the matching process for a particular rule. These variables include both original values from the event as well as values computed during the matching process.

The Extracted Variables panel

Fig. 115 The Extracted Variables panel showing names and values of variables regarding a particular event.

Within the “Extracted Variables” pane, all variables are reported: both the set of default variables and the additional variables deriving from the definition of the rule attributes are displayed.

The Rule Testing Panel

The “Test” panel can be opened by clicking on the “Test” button under the panel title. The “Test Area” panel will then appear on the right side as in Fig. 117.

The EventHandler Test panel

Fig. 116 The EventHandler Test panel

To test a particular event against the rules, put some event text into the text area labeled “Content” and click on the green “Test” button below it. This will allow you to see the flow of rules in the main panel as the EventHandler attempts to match them to your event.

Colors showing which rules matched

Fig. 117 Colors showing which rules matched the incoming test message.

In the grid there are three colors:

  • Red: The rule did not match

  • Orange: The rule partially matched

  • Green: The rule fully matched

When you click on the row you are interested in, the details panel will appear on your right-hand side (if not, simply click on the collapsed box to open it). Here you can see which specific fields/attributes matched (green), partially matched (orange), or did not match (red), as shown in the Details panel in Fig. 117.

Testing Rules From Archived Events

In addition to manually entering free-form text for an event in the Content panel, you can also use the Archive tab in the Test panel to select a real historical event from a log archive. Clicking on a day, and then on an event for that day, will transfer that event text into the Content panel, and you can then press the “Test” button as above.

In the Options section, you can select which types of rules to show in the main panel for following the matching process: (1) only the rules that matched, (2) the flow of matching rules, or (3) all rules. If you set the “Execute action(s)” flag, the defined action will be executed, even if it is a historical event.

Archive tab of Testing pane

Fig. 118 The Archive tab of the Testing pane.

Network interfaces monitoring


This module is deprecated.

Service Checks Information

Network Interface Table Service Check


Please note that the Interface Table module may be exploited by an authenticated user to access or execute files on the file system. When the Interface Table Service Check is installed, a management network with restricted access is encouraged.

Please note that a new version which overcomes these security problems is planned for an upcoming release.

Interfacetable_v3t is an addon that allows you to monitor the network interfaces of a node (e.g. router, switch, server) without knowing each interface in detail.

For additional information see the official documentation.


# yum install --enablerepo=neteye-extras icingaweb2-module-interfacetable icingaweb2-module-interfacetable-autosetup
# neteye_secure_install


The module is enabled as soon as the neteye_secure_install is executed, but a new command must be configured before it can be used for configuring a service check. Navigate to Director > Commands to create the command as depicted in Fig. 119, which shows the three fields that must be configured:

  • Command type: Must be set to Plugin Check Command

  • Command name: The name of the command which will be used to configure a Service

  • Command: The path to the script executed by the command:

Add Interface Table command

Fig. 119 Add Interface Table command

After deploying the command you just created, you must configure the Arguments. Click on the command name in the Commands section of Director, go to the Arguments tab and add the Arguments:

The arguments to add are:

  • the SNMP v1/v2c community string (e.g., wuerthphoenix)

  • the global plugin timeout
Fig. 120 shows the resulting Arguments panel

Result after adding arguments

Fig. 120 Result after adding arguments