User Guide

Infrastructure

Deployment

In Icinga2, configuration changes are staged until they are deployed, i.e., until the Icinga2 configuration is generated and distributed to all agents in the zone. Initiating a deploy action checks the validity of the configuration and records the details of the outcome. There are two principal elements, both reachable from the main Director menu:

  • Deployment: Manage the actions around deploying a new configuration.

  • Activity Log: Look at the deployment history, and carry out actions on that history.

Deploying a Configuration

To deploy a modified configuration, go to Director > Deployments. There you will see the Deployments tab (Fig. 18), which shows all recent successful (green check) and failed (red ‘X’) deployment attempts along with the date and time of those actions. Clicking on an entry will take you to the “Deployment details” panel, which additionally contains any warnings that may have been issued, and a ‘Configuration’ link that shows which configuration files were modified when the deployment occurred.

The deployments outcome panel

Fig. 18 The deployments outcome panel

Now click on the Render config action, which displays the “Generated config” panel, a summary of the newly generated configuration (Fig. 19). This panel contains a list of the files involved in the configuration changes, including a summary of the number of objects per file by type (objects, templates, or apply rules), and the new size of that file.

Deployment outcome panel

Fig. 19 Deployment outcome panel

You will now see three new actions:

  • Deploy pending changes: Implement the deployment, which distributes the configuration to all Icinga2 agents. You can distribute the deployment even if there are no changes to make.

  • Last related activity: Show differences between the current configuration and the most recent configuration before the current one.

  • Diff with other config: Compare any two configurations using their unique identifiers (the numbers in parentheses in Fig. 18). The current configuration is inserted by default.
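
If you prefer working from the command line, the deployment workflow described above can also be driven through the Icinga Director CLI. The following is a minimal sketch, assuming the Director CLI module is available on the NetEye Master; the web interface remains the reference procedure:

# Render and store the pending configuration without deploying it
icingacli director config render

# Deploy the pending configuration to all Icinga2 agents in the zone
icingacli director config deploy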

Activity Log

The Activity Log Panel (Fig. 20) lets you look at the history of successful deployments, and carry out actions on items in that history.

The My changes action lets you switch from showing the history of all changes to the view showing only those changes you made. You can then click on All changes to return to viewing changes made by all users.

The activity log panel

Fig. 20 The activity log panel

Each row represents a successful change to an object, coded for action type, user (“cli” indicates an automated action) and time. The action types are:

  • Create (a blue “+” icon)

  • Modify (a green “wrench” icon)

  • Delete (a red “x” icon)

A duplicate timestamp over consecutive rows indicates those objects were deployed at the same time. Clicking on the modify action type in particular will take you to the Diff panel (Fig. 21) that will detail exactly what changes were made.

Showing the differences between before and after configurations

Fig. 21 Showing the differences between before and after configurations

Once you have completed a successful deployment of monitoring objects in Director, you can then go to the host monitoring panel (e.g., click on a host under Icinga Director ‣ Hosts ‣ Hosts) to check on the success of the overall monitoring configuration.

Satellite Nodes

Prerequisites

A Satellite is a NetEye instance which depends on a main NetEye installation, the Master (which, in the case of a cluster, is also the Master node), and carries out tasks such as:

  • execute Icinga 2 checks and forward results to the Master

  • collect logs and forward them to the Master

  • forward data through NATS

Both the Master and the Satellite must be equipped with the same NetEye version.

Warning

Before proceeding with the Satellite configuration procedure, if you are in a NetEye cluster environment, remember to put all nodes into standby except the one on which the configuration steps are executed (the Master).

The remainder of this section will lead you through the steps needed to configure a new NetEye Satellite.

For further information about NetEye Satellites, refer to Section Satellite.

Configuration of a Satellite

To configure a new Satellite (we’ll call the Satellite acmesatellite), create the configuration file /etc/neteye-satellites.d/acmesatellite.conf on the Master. The basename of the file, without the trailing .conf, will be used as the Satellite’s unique identifier (acmesatellite).

Warning

Use a meaningful name for each Satellite you want to configure; the name must also satisfy the following requirements:

  • matching the following regex /^[a-zA-Z0-9]+$/, i.e., it must contain only alphanumeric characters

  • not containing the master keyword
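
As a quick convenience check (not part of the official procedure), the two requirements above can be verified from a shell before creating the configuration file:

SAT_NAME="acmesatellite"
# The name must contain only alphanumeric characters and must not contain "master"
if [[ "$SAT_NAME" =~ ^[a-zA-Z0-9]+$ ]] && [[ "$SAT_NAME" != *master* ]]; then
    echo "valid satellite name"
else
    echo "invalid satellite name"
fi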

The configuration file must have the following content:

{
  "fqdn": "acmesatellite.example.com",
  "name": "acmesatellite",
  "ssh_port": "22",
  "ssh_enabled": true
}

The configuration file of the Satellite must contain the following attributes:

  • fqdn: this is the Satellite fully qualified domain name.

  • name: this is the Satellite name; it must coincide with the name of the configuration file.

  • ssh_port: this is the port to use for SSH from Master to Satellite. Specify a different port in case of custom SSH configurations.

  • ssh_enabled: if set to true, SSH connections from Master to the Satellite can be established. If set to false, configuration files for the Satellite must be manually copied from Master.

Note

Remember to add the FQDN of the Satellite to /etc/hosts. If you are in a cluster environment, you must change /etc/hosts on each node of the cluster.
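
For example, a corresponding /etc/hosts entry could look as follows (the IP address is purely illustrative):

192.0.2.10    acmesatellite.example.com    acmesatellite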

If you are installing a Satellite within a cluster, run the following command to synchronise the files /etc/neteye-satellites.d/* and /etc/neteye-cluster on all cluster nodes:

neteye config cluster sync

Generate the Satellite Configuration Archive and Configure the Master

To generate the configuration files for the acmesatellite Satellite and to reconfigure Master services, run the following command on the Master:

neteye satellite config create acmesatellite

The command generates all the required configuration files for the Satellite, which are stored in /root/satellite-setup/config/<neteye_release>/acmesatellite-satellite-config.

The command executes the Master autosetup scripts located in /usr/share/neteye/secure_install_satellite/master/, automatically reconfiguring services to allow the interaction with the Satellite.

For example, the NATS Server is reconfigured to authorize leaf connections from acmesatellite, while streams coming from the Satellite are exported in order to be accessible from Tornado or Telegraf consumers.

Note

A pre-existing NATS Server configuration must be migrated before configuring any new Satellite. Please refer to this section to understand how to migrate your NATS Server configuration.

A new Telegraf local consumer is also automatically started and configured for each Satellite, to consume metrics coming from the Satellite through NATS and to write them to InfluxDB. In our example, the Telegraf instance is called telegraf-local@neteye_consumer_influxdb_acmesatellite, after the Satellite name.

Note

If you are in a cluster environment, an instance of the Telegraf local consumer is started on each node of the cluster, to exploit the NATS built-in load balancing feature called distributed queue. For more information about this feature, see the official NATS documentation.
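
To verify that the consumer for our example Satellite is running, you can check its systemd unit (the instance name follows the pattern shown above; this is only an illustrative check):

systemctl status telegraf-local@neteye_consumer_influxdb_acmesatellite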

The command also creates an archive containing all the configuration files, so that they can easily be moved to the Satellite. The archive can be found at /root/satellite-setup/config/<neteye_release>/acmesatellite-satellite-config.tar.gz.

Alternatively, configurations can be generated for all the Satellites defined in /etc/neteye-satellites.d/ by typing:

neteye satellite config create --all

Synchronize the Satellite Configuration Archive

To synchronize the configuration files between the Master and the acmesatellite Satellite, provided ssh_enabled is set to true, run the following command on the Master:

neteye satellite config send acmesatellite

Note

If the attribute ssh_port is not defined, the default SSH port (22) is used. If the attribute ssh_enabled is set to false or not defined for a specific Satellite, the configuration archive must be manually copied before proceeding.

The command uses the unique ID of the Satellite to retrieve the connection attributes from the Satellite configuration file /etc/neteye-satellites.d/acmesatellite.conf, and uses them to send the archive acmesatellite-satellite-config.tar.gz to the Satellite.

Alternatively, configuration archives can be sent to all Satellites defined in /etc/neteye-satellites.d/ by typing:

neteye satellite config send --all

Satellite Setup

Configure the acmesatellite Satellite with the following command on the Satellite itself:

neteye satellite setup

This command performs three actions:

  • Copies the configuration files into the correct places, overriding current configurations, if any.

  • Creates a backup of the configuration for future use in /root/satellite-setup/config/<neteye_release>/satellite-config-backup-<timestamp>/

  • Executes the autosetup scripts located in /usr/share/neteye/secure_install_satellite/satellite/

To execute this command, the configuration archive must be located at /root/satellite-setup/config/<neteye_release>/satellite-config.tar.gz. Use the neteye satellite config send command, or copy the archive manually if no SSH connection is available.
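
If no SSH connection is available, the archive has to reach the Satellite by other means (removable media, a jump host, etc.). The following is only a sketch, assuming the archive was already transferred to a temporary location such as /tmp on the Satellite (a hypothetical path); adjust the source file name to match what was actually generated on the Master:

# On the Satellite: place the archive where "neteye satellite setup" expects it
# (substitute <neteye_release> with your actual NetEye release)
mkdir -p /root/satellite-setup/config/<neteye_release>
cp /tmp/acmesatellite-satellite-config.tar.gz \
   /root/satellite-setup/config/<neteye_release>/satellite-config.tar.gz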

Note

The configuration provided by the Master is not user customizable: any change will be overwritten by the new configuration when running neteye satellite setup.

Note

Services configured via neteye satellite setup, like the NATS Server, must be restarted manually in case of a reboot of the Satellite.
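
For example, after a reboot of the Satellite you would typically restart the NATS Server by hand (the unit name nats-server is an assumption; adapt it to your installation):

systemctl restart nats-server
systemctl status nats-server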

Agent Nodes

Icinga2 packages

Icinga2 packages for the installation of the Icinga2 agent are provided for different OSs/distributions via the NetEye repositories. Specifically, we support:

Debian derivatives:
  • Debian Buster

  • Debian Jessie

  • Debian Stretch

  • Ubuntu Xenial

  • Ubuntu Bionic

  • Ubuntu Eoan

  • Ubuntu Focal

  • Ubuntu Groovy

Red Hat derivatives:
  • CentOS 6

  • CentOS 7

  • CentOS 8

  • Fedora 29

  • Fedora 30

  • Fedora 31

  • Fedora 32

SUSE derivatives:
  • OpenSuse 15.0

  • OpenSuse 15.1

  • SLES 12.4

  • SLES 12.5

  • SLES 15.0

  • SLES 15.1

  • SLES 15.2

and Windows.

Note

In order to install Icinga2 packages you need to have the boost libraries installed (version 1.66.0 or newer) or available via the default package manager.
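
For instance, on an RPM-based system you can check whether a suitable boost version is already installed or available via the package manager (a convenience sketch; package names may differ per distribution):

# Check the installed boost version (RPM-based systems)
rpm -q boost
# Or check which version the default package manager would install
yum info boost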

Icinga2 repository versioning

You must use the Icinga2 packages provided by the NetEye repositories instead of the official Icinga2 packages. From 4.16 onwards, Icinga2 agent packages are specific both to the NetEye version and to the monitored operating system version, so you must adjust the package URLs accordingly. If you are downloading packages for NetEye 4.<neteye_minor>, replace neteye-x.x-icinga2-agents with neteye-4.<neteye_minor>-icinga2-agents in the package URLs below.

Add the NetEye repository for Icinga2 packages

This section explains how to add the dedicated NetEye repository for Icinga2 packages on different OSs and distributions (e.g. Ubuntu, CentOS, SUSE), so that an Icinga2 agent can be installed via the default package manager of the OS.

Repository URLs follow this syntax:

https://repo.wuerth-phoenix.com/<distribution>-<codename_or_version>/neteye-4.<neteye_minor>-icinga2-agents/

Icinga2 RPM repository

To add the repository that provides the Icinga2 RPM packages (e.g. CentOS, SUSE, Fedora) you have to add a new repository definition to your system.

Let us suppose that you need to add the new repository definition on a CentOS 7 machine, which is monitored via NetEye 4.16. You can add the repo definition in a file neteye-icinga2-agent.repo:

[neteye-agent]
name=NetEye Icinga2 Agent Packages
baseurl=https://repo.wuerth-phoenix.com/centos-7/neteye-4.16-icinga2-agents/
gpgcheck=0
enabled=1
priority=1

Please note that the location of this file will change according to the distribution used. For example, on Fedora and CentOS installations the default repo definition directory is /etc/yum.repos.d/, while SUSE uses /etc/zypp/repos.d/.

Once the new repository has been added, you need to load the new repository data by running yum update.

Icinga2 DEB repository

To add the Icinga2 agent repository on Ubuntu or Debian systems you have to create the file neteye-icinga2-agent.list in the directory /etc/apt/sources.list.d/.

For example, to add the repository on Ubuntu 20.04 Focal Fossa you have to create a file with the following content:

"deb [trusted=yes] https://repo.wuerth-phoenix.com/ubuntu-focal/neteye-4.16-icinga2-agents/ stable main"

Finally, run apt update to update the repo data.

Icinga2 windows packages

Get the Icinga2 Agent for Windows by accessing the URL below and downloading the .msi file:

https://repo.wuerth-phoenix.com/windows/neteye-x.x-icinga2-agents/

Install Icinga2

To install Icinga2, follow the Icinga2 Documentation. Icinga2 requires the boost libraries to work properly; ensure that the libraries are also installed on the system.
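
As a rough reference, once the NetEye repository has been added, the installation itself usually amounts to installing the icinga2 package with the default package manager (illustrative commands; the Icinga2 documentation remains the authoritative source):

# CentOS / Fedora (on SUSE use: zypper install icinga2)
yum install icinga2

# Debian / Ubuntu
apt update && apt install icinga2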

To install the Windows MSI on an agent, follow the official Icinga2 Windows Agent Installation document.

Working with Icinga 2 Agents can be quite tricky, as each Agent needs its own Endpoint and Zone definition, correct parent, peering host and log settings. There may always be reasons for a completely custom-made configuration. However, we strongly suggest using the Director-assisted variant. It will save you a lot of headaches.

Preparation

Agent settings are not available for modification directly on a host object, so you first need to create an “Icinga Agent” template. You could name it exactly like that; it’s important to use meaningful names for your templates.

Create an Agent template

Fig. 22 Create an Agent template

As long as you’re not using Satellite nodes, a single Agent zone is all you need. Otherwise, you should create one Agent template per satellite zone. If you want to move an Agent to a specific zone, just assign it the correct template and you’re all done.

Usage

Create a host, choose an Agent template, and that’s it:

Create an Agent-based host

Fig. 23 Create an Agent-based host

Once you import the “Icinga Agent” template, you’ll see a new “Agent” tab. It tries to assist you with the initial Agent setup by showing a sample config:

Agent instructions 1

Fig. 24 Agent instructions 1

The preview shows that the Icinga Director would deploy multiple objects for your newly created host:

Agent preview

Fig. 25 Agent preview
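
For orientation, the objects rendered for an Agent-based host are conceptually equivalent to the following plain Icinga2 DSL (names and values are illustrative, not the exact Director output):

object Endpoint "agenthost.example.com" {
  host = "agenthost.example.com"
}

object Zone "agenthost.example.com" {
  parent = "master"
  endpoints = [ "agenthost.example.com" ]
}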

Create Agent-based services

It is a similar game for services that should run on your Agents. First, create a template with a meaningful name. Then specify that services inheriting from this template should run on your Agents.

Agent-based service

Fig. 26 Agent-based service

Please do not set a cluster zone, as this is rarely necessary. Agent-based services will always be deployed to their Agent’s zone by default. All you need to do now for services that should be executed on your Agents is to import that template:

Agent-based load check

Fig. 27 Agent-based load check

The config preview shows that everything works as expected:

Agent-based service preview

Fig. 28 Agent-based service preview
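
Conceptually, the rendered configuration corresponds to a service whose check is executed on the Agent’s own endpoint, roughly like the following plain Icinga2 DSL sketch (host and service names are illustrative, not the exact Director output):

object Service "load" {
  host_name = "agenthost.example.com"
  check_command = "load"
  # execute the check on the Agent instead of on the master/satellite
  command_endpoint = "agenthost.example.com"
}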