1 # Distributed Monitoring with Master, Satellites, and Clients <a id="distributed-monitoring"></a>
3 This chapter will guide you through the setup of a distributed monitoring
4 environment, including high-availability clustering and setup details
5 for the Icinga 2 client.
7 ## Roles: Master, Satellites, and Clients <a id="distributed-monitoring-roles"></a>
9 Icinga 2 nodes can be given names for easier understanding:
11 * A `master` node which is on top of the hierarchy.
12 * A `satellite` node which is a child of a `satellite` or `master` node.
13 * A `client` node which works as an `agent` connected to `master` and/or `satellite` nodes.
15 ![Icinga 2 Distributed Roles](images/distributed-monitoring/icinga2_distributed_roles.png)
Rephrasing this picture in more detail:
19 * A `master` node has no parent node.
* A `master` node is where you usually install Icinga Web 2.
21 * A `master` node can combine executed checks from child nodes into backends and notifications.
22 * A `satellite` node has a parent and a child node.
23 * A `satellite` node may execute checks on its own or delegate check execution to child nodes.
24 * A `satellite` node can receive configuration for hosts/services, etc. from the parent node.
25 * A `satellite` node continues to run even if the master node is temporarily unavailable.
26 * A `client` node only has a parent node.
27 * A `client` node will either run its own configured checks or receive command execution events from the parent node.
29 The following sections will refer to these roles and explain the
30 differences and the possibilities this kind of setup offers.
32 **Tip**: If you just want to install a single master node that monitors several hosts
(i.e. Icinga 2 clients), continue reading -- we'll start with the single master setup first.
35 In case you are planning a huge cluster setup with multiple levels and
36 lots of clients, read on -- we'll deal with these cases later on.
38 The installation on each system is the same: You need to install the
39 [Icinga 2 package](02-installation.md#setting-up-icinga2) and the required [plugins](02-installation.md#setting-up-check-plugins).
The required configuration steps are mostly done
42 on the command line. You can also [automate the setup](06-distributed-monitoring.md#distributed-monitoring-automation).
The first thing you need to learn about a distributed setup is the hierarchy of its individual components.
46 ## Zones <a id="distributed-monitoring-zones"></a>
48 The Icinga 2 hierarchy consists of so-called [zone](09-object-types.md#objecttype-zone) objects.
49 Zones depend on a parent-child relationship in order to trust each other.
51 ![Icinga 2 Distributed Zones](images/distributed-monitoring/icinga2_distributed_zones.png)
53 Have a look at this example for the `satellite` zones which have the `master` zone as a parent zone:
56 object Zone "master" {
60 object Zone "satellite region 1" {
65 object Zone "satellite region 2" {
71 There are certain limitations for child zones, e.g. their members are not allowed
72 to send configuration commands to the parent zone members. Vice versa, the
trust hierarchy allows, for example, the `master` zone to send
configuration files to the `satellite` zone. Read more about this
75 in the [security section](06-distributed-monitoring.md#distributed-monitoring-security).
77 `client` nodes also have their own unique zone. By convention you
78 can use the FQDN for the zone name.
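
As a minimal sketch (the client name and IP address are illustrative assumptions), such a client zone and its endpoint could look like this:

```
object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" // assumed client address, for illustration only
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]
  parent = "master" // the master zone is the trusted parent
}
```
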
80 ## Endpoints <a id="distributed-monitoring-endpoints"></a>
Nodes which are members of a zone are so-called [Endpoint](09-object-types.md#objecttype-endpoint) objects.
84 ![Icinga 2 Distributed Endpoints](images/distributed-monitoring/icinga2_distributed_endpoints.png)
86 Here is an example configuration for two endpoints in different zones:

```
object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-satellite1.localdomain" {
  host = "192.168.56.105"
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ]
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain" ]
  parent = "master"
}
```

All endpoints in the same zone work as a high-availability setup. For
108 example, if you have two nodes in the `master` zone, they will load-balance the check execution.
110 Endpoint objects are important for specifying the connection
111 information, e.g. if the master should actively try to connect to a client.
113 The zone membership is defined inside the `Zone` object definition using
114 the `endpoints` attribute with an array of `Endpoint` names.
> **Note**
>
> There is a known [problem](https://github.com/Icinga/icinga2/issues/3533)
> with >2 endpoints in a zone and a message routing loop.
> The config validation will log a warning to let you know about this too.
122 If you want to check the availability (e.g. ping checks) of the node
123 you still need a [Host](09-object-types.md#objecttype-host) object.
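
For example, a minimal sketch of such a Host object (the address is an illustrative assumption):

```
object Host "icinga2-satellite1.localdomain" {
  check_command = "hostalive" // ping-based availability check executed by the parent
  address = "192.168.56.105"  // assumed address, for illustration only
}
```
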
125 ## ApiListener <a id="distributed-monitoring-apilistener"></a>
127 In case you are using the CLI commands later, you don't have to write
128 this configuration from scratch in a text editor.
129 The [ApiListener](09-object-types.md#objecttype-apilistener) object is
130 used to load the TLS certificates and specify restrictions, e.g.
131 for accepting configuration commands.
133 It is also used for the [Icinga 2 REST API](12-icinga2-api.md#icinga2-api) which shares
134 the same host and port with the Icinga 2 Cluster protocol.
136 The object configuration is stored in the `/etc/icinga2/features-enabled/api.conf`
137 file. Depending on the configuration mode the attributes `accept_commands`
138 and `accept_config` can be configured here.
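
A sketch of the relevant attributes in this file -- both default to `false`, and the values shown here are only an illustration, not a recommendation:

```
object ApiListener "api" {
  accept_commands = true // accept command execution events from the parent zone
  accept_config = true   // accept configuration synced from the parent zone
}
```
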
140 In order to use the `api` feature you need to enable it and restart Icinga 2.

```
icinga2 feature enable api
```

146 ## Conventions <a id="distributed-monitoring-conventions"></a>
148 By convention all nodes should be configured using their FQDN.
150 Furthermore, you must ensure that the following names
151 are exactly the same in all configuration files:
153 * Host certificate common name (CN).
154 * Endpoint configuration object for the host.
155 * NodeName constant for the local host.
Setting this up on the command line will help you minimize the effort.
158 Just keep in mind that you need to use the FQDN for endpoints and for
159 common names when asked.
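
For example, assuming the FQDN `icinga2-master1.localdomain`, the three names would line up like this (a sketch, not wizard-generated output):

```
/* constants.conf */
const NodeName = "icinga2-master1.localdomain"

/* zones.conf -- the Endpoint name matches the certificate common name (CN) */
object Endpoint "icinga2-master1.localdomain" {
}
```
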
161 ## Security <a id="distributed-monitoring-security"></a>
While there are certain mechanisms to ensure a secure communication between all
nodes (firewalls, policies, software hardening, etc.), Icinga 2 also provides
additional security:
167 * SSL certificates are mandatory for communication between nodes. The CLI commands
168 help you create those certificates.
169 * Child zones only receive updates (check results, commands, etc.) for their configured objects.
170 * Child zones are not allowed to push configuration updates to parent zones.
171 * Zones cannot interfere with other zones and influence each other. Each checkable host or service object is assigned to **one zone** only.
172 * All nodes in a zone trust each other.
* [Config sync](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync) and [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) are disabled by default.
175 The underlying protocol uses JSON-RPC event notifications exchanged by nodes.
176 The connection is secured by TLS. The message protocol uses an internal API,
177 and as such message types and names may change internally and are not documented.
Zones build the trust relationship in a distributed environment. If you do not specify
a zone for a client together with its parent zone, the parent zone's members (e.g. the master instance)
won't trust the client.
183 Building this trust is key in your distributed environment. That way the parent node
184 knows that it is able to send messages to the child zone, e.g. configuration objects,
185 configuration in global zones, commands to be executed in this zone/for this endpoint.
186 It also receives check results from the child zone for checkable objects (host/service).
Vice versa, the client trusts the master and accepts configuration and commands if enabled
in the api feature. If the client were to send configuration to the parent zone, the parent nodes
would deny it. The parent zone is the configuration entity and does not trust clients in this matter.
A client could, for example, attempt to modify a different client, or inject a check command
with malicious code.
194 While it may sound complicated for client setups, it removes the problem with different roles
195 and configurations for a master and a client. Both of them work the same way, are configured
196 in the same way (Zone, Endpoint, ApiListener), and you can troubleshoot and debug them in just one go.
198 ## Versions and Upgrade <a id="distributed-monitoring-versions-upgrade"></a>
It is generally advised to use the newest releases with the same version on all instances.
201 Prior to upgrading, make sure to plan a maintenance window.
203 The Icinga project aims to allow the following compatibility:

```
master (2.11) >= satellite (2.10) >= clients (2.9)
```

209 Older client versions may work, but there's no guarantee. Always keep in mind that
210 older versions are out of support and can contain bugs.
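
To check which version a node currently runs, you can, for example, query it directly on each instance:

```
icinga2 --version
```
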
212 In terms of an upgrade, ensure that the master is upgraded first, then
213 involved satellites, and last the Icinga 2 clients. If you are on v2.10
214 currently, first upgrade the master instance(s) to 2.11, and then proceed
with the satellites. Things get easier with any sort of automation
tool (Puppet, Ansible, etc.).
Releases and new features may require you to upgrade master/satellite instances at once;
this is highlighted in the [upgrading docs](16-upgrading-icinga-2.md#upgrading-icinga-2) if needed.
One example is the CA Proxy and on-demand signing feature
available since v2.8, which requires all involved instances
to run that version in order to function properly.
224 ## Master Setup <a id="distributed-monitoring-setup-master"></a>
226 This section explains how to install a central single master node using
227 the `node wizard` command. If you prefer to do an automated installation, please
228 refer to the [automated setup](06-distributed-monitoring.md#distributed-monitoring-automation) section.
Install the [Icinga 2 package](02-installation.md#setting-up-icinga2) and set up
the required [plugins](02-installation.md#setting-up-check-plugins) if you haven't done
so already.
234 **Note**: Windows is not supported for a master node setup.
The next step is to run the `node wizard` CLI command. Prior to that,
make sure to collect the required information:
239 Parameter | Description
240 --------------------|--------------------
241 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
Master zone name    | **Optional.** Allows you to specify the master zone name. Defaults to `master`.
Global zones        | **Optional.** Allows you to specify more global zones in addition to `global-templates` and `director-global`. Defaults to `n`.
API bind host       | **Optional.** Allows you to specify the address the ApiListener is bound to. For advanced usage only.
API bind port       | **Optional.** Allows you to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
Disable conf.d      | **Optional.** Allows you to disable the `include_recursive "conf.d"` directive except for the `api-users.conf` file in the `icinga2.conf` file. Defaults to `y`. Configuration on the master is discussed below.
248 The setup wizard will ensure that the following steps are taken:
250 * Enable the `api` feature.
251 * Generate a new certificate authority (CA) in `/var/lib/icinga2/ca` if it doesn't exist.
252 * Create a certificate for this node signed by the CA key.
253 * Update the [zones.conf](04-configuration.md#zones-conf) file with the new zone hierarchy.
254 * Update the [ApiListener](06-distributed-monitoring.md#distributed-monitoring-apilistener) and [constants](04-configuration.md#constants-conf) configuration.
255 * Update the [icinga2.conf](04-configuration.md#icinga2-conf) to disable the `conf.d` inclusion, and add the `api-users.conf` file inclusion.
257 Here is an example of a master setup for the `icinga2-master1.localdomain` node on CentOS 7:

```
[root@icinga2-master1.localdomain /]# icinga2 node wizard

Welcome to the Icinga 2 Setup Wizard!

We will guide you through all required configuration details.

Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]: n

Starting the Master setup routine...

Please specify the common name (CN) [icinga2-master1.localdomain]: icinga2-master1.localdomain
Reconfiguring Icinga...
Checking for existing certificates for common name 'icinga2-master1.localdomain'...
Certificates not yet generated. Running 'api setup' now.
Generating master configuration for Icinga 2.
Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.

Master zone name [master]:

Default global zones: global-templates director-global
Do you want to specify additional global zones? [y/N]: N

Please specify the API bind host/port (optional):

Do you want to disable the inclusion of the conf.d directory [Y/n]:
Disabling the inclusion of the conf.d directory...
Checking if the api-users.conf file exists...

Now restart your Icinga 2 daemon to finish the installation!
```

295 You can verify that the CA public and private keys are stored in the `/var/lib/icinga2/ca` directory.
296 Keep this path secure and include it in your [backups](02-installation.md#install-backup).
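
As a simple sketch (the destination path is an assumption for illustration), a backup of the CA directory could be created like this:

```
tar czf /root/icinga2-ca-backup.tar.gz /var/lib/icinga2/ca
```
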
In case you lose the CA private key you have to generate a new CA for signing new client
certificate requests. You then have to also re-create new signed certificates for all
existing nodes.
Once the master setup is complete, you can also use this node as primary [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing)
master. The following section will explain how to use the CLI commands in order to fetch
the signed certificates for satellite/client nodes from this master node.
306 ## Signing Certificates on the Master <a id="distributed-monitoring-setup-sign-certificates-master"></a>
308 All certificates must be signed by the same certificate authority (CA). This ensures
309 that all nodes trust each other in a distributed monitoring environment.
311 This CA is generated during the [master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master)
312 and should be the same on all master instances.
314 You can avoid signing and deploying certificates [manually](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-certificates-manual)
315 by using built-in methods for auto-signing certificate signing requests (CSR):
317 * [CSR Auto-Signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) which uses a client ticket generated on the master as trust identifier.
* [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) which allows you to sign pending certificate requests on the master.
320 Both methods are described in detail below.
> **Note**
>
> [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) is available in Icinga 2 v2.8+.
326 ### CSR Auto-Signing <a id="distributed-monitoring-setup-csr-auto-signing"></a>
328 A client which sends a certificate signing request (CSR) must authenticate itself
329 in a trusted way. The master generates a client ticket which is included in this request.
330 That way the master can verify that the request matches the previously trusted ticket
331 and sign the request.
> **Note**
>
> Icinga 2 v2.8 added the possibility to forward signing requests on a satellite
> to the master node. This is called `CA Proxy` in blog posts and design drafts.
> This functionality helps with the setup of [three level clusters](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client)
> and more.

Advantages:
342 * Nodes can be installed by different users who have received the client ticket.
343 * No manual interaction necessary on the master node.
344 * Automation tools like Puppet, Ansible, etc. can retrieve the pre-generated ticket in their client catalog
345 and run the node setup directly.
Disadvantages:

* Tickets need to be generated on the master and copied to client setup wizards.
350 * No central signing management.
353 Setup wizards for satellite/client nodes will ask you for this specific client ticket.
355 There are two possible ways to retrieve the ticket:
357 * [CLI command](11-cli-commands.md#cli-command-pki) executed on the master node.
358 * [REST API](12-icinga2-api.md#icinga2-api) request against the master node.
360 Required information:
362 Parameter | Description
363 --------------------|--------------------
364 Common name (CN) | **Required.** The common name for the satellite/client. By convention this should be the FQDN.
366 The following example shows how to generate a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:

```
[root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
```

372 Querying the [Icinga 2 API](12-icinga2-api.md#icinga2-api) on the master requires an [ApiUser](12-icinga2-api.md#icinga2-api-authentication)
373 object with at least the `actions/generate-ticket` permission.

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/conf.d/api-users.conf

object ApiUser "client-pki-ticket" {
  password = "bea11beb7b810ea9ce6ea" //change this
  permissions = [ "actions/generate-ticket" ]
}

[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

385 Retrieve the ticket on the master node `icinga2-master1.localdomain` with `curl`, for example:

```
[root@icinga2-master1.localdomain /]# curl -k -s -u client-pki-ticket:bea11beb7b810ea9ce6ea -H 'Accept: application/json' \
 -X POST 'https://localhost:5665/v1/actions/generate-ticket' -d '{ "cn": "icinga2-client1.localdomain" }'
```

391 Store that ticket number for the satellite/client setup below.
> **Note**
>
> Never expose the ticket salt and/or ApiUser credentials to your client nodes.
> Example: Retrieve the ticket on the Puppet master node and send the compiled catalog
> to the authorized Puppet agent node which will invoke the
> [automated setup steps](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup).
400 ### On-Demand CSR Signing <a id="distributed-monitoring-setup-on-demand-csr-signing"></a>
The client sends a certificate signing request to the specified parent node without any
403 ticket. The admin on the master is responsible for reviewing and signing the requests
404 with the private CA key.
406 This could either be directly the master, or a satellite which forwards the request
407 to the signing master.
Advantages:

* Central certificate request signing management.
412 * No pre-generated ticket is required for client setups.
Disadvantages:

* Asynchronous step for automated deployments.
417 * Needs client verification on the master.
420 You can list pending certificate signing requests with the `ca list` CLI command.

```
[root@icinga2-master1.localdomain /]# icinga2 ca list
Fingerprint                                                      | Timestamp           | Signed | Subject
-----------------------------------------------------------------|---------------------|--------|--------
71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850 | 2017/09/06 17:20:02 |        | CN = icinga2-client2.localdomain
```

429 In order to show all requests, use the `--all` parameter.

```
[root@icinga2-master1.localdomain /]# icinga2 ca list --all
Fingerprint                                                      | Timestamp           | Signed | Subject
-----------------------------------------------------------------|---------------------|--------|--------
403da5b228df384f07f980f45ba50202529cded7c8182abf96740660caa09727 | 2017/09/06 17:02:40 | *      | CN = icinga2-client1.localdomain
71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850 | 2017/09/06 17:20:02 |        | CN = icinga2-client2.localdomain
```

439 **Tip**: Add `--json` to the CLI command to retrieve the details in JSON format.
441 If you want to sign a specific request, you need to use the `ca sign` CLI command
and pass its fingerprint as an argument.

```
[root@icinga2-master1.localdomain /]# icinga2 ca sign 71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850
information/cli: Signed certificate for 'CN = icinga2-client2.localdomain'.
```

> **Note**
>
> `ca list` cannot be used as a historical inventory. Certificate
> signing requests older than 1 week are automatically deleted.
You can also remove an undesired CSR using the `ca remove` command with the
same syntax as the `ca sign` command.

```
[root@pym ~]# icinga2 ca remove 5c31ca0e2269c10363a97e40e3f2b2cd56493f9194d5b1852541b835970da46e
information/cli: Certificate 5c31ca0e2269c10363a97e40e3f2b2cd56493f9194d5b1852541b835970da46e removed.
```

461 If you want to restore a certificate you have removed, you can use `ca restore`.
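
For example, reusing the fingerprint shown in the `ca remove` output above:

```
[root@pym ~]# icinga2 ca restore 5c31ca0e2269c10363a97e40e3f2b2cd56493f9194d5b1852541b835970da46e
```
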
464 ## Client/Satellite Setup <a id="distributed-monitoring-setup-satellite-client"></a>
466 This section describes the setup of a satellite and/or client connected to an
467 existing master node setup. If you haven't done so already, please [run the master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master).
469 Icinga 2 on the master node must be running and accepting connections on port `5665`.
472 ### Client/Satellite Setup on Linux <a id="distributed-monitoring-setup-client-linux"></a>
474 Please ensure that you've run all the steps mentioned in the [client/satellite section](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
Install the [Icinga 2 package](02-installation.md#setting-up-icinga2) and set up
the required [plugins](02-installation.md#setting-up-check-plugins) if you haven't done
so already.
480 The next step is to run the `node wizard` CLI command.
482 In this example we're generating a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:

```
[root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
4f75d2ecd253575fe9180938ebff7cbca262f96e
```

489 Note: You don't need this step if you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
491 Start the wizard on the client `icinga2-client1.localdomain`:

```
[root@icinga2-client1.localdomain /]# icinga2 node wizard

Welcome to the Icinga 2 Setup Wizard!

We will guide you through all required configuration details.
```

501 Press `Enter` or add `y` to start a satellite or client setup.

```
Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]:
```

507 Press `Enter` to use the proposed name in brackets, or add a specific common name (CN). By convention
508 this should be the FQDN.

```
Starting the Client/Satellite setup routine...

Please specify the common name (CN) [icinga2-client1.localdomain]: icinga2-client1.localdomain
```

516 Specify the direct parent for this node. This could be your primary master `icinga2-master1.localdomain`
517 or a satellite node in a multi level cluster scenario.

```
Please specify the parent endpoint(s) (master or satellite) where this node should connect to:
Master/Satellite Common Name (CN from your master/satellite node): icinga2-master1.localdomain
```

524 Press `Enter` or choose `y` to establish a connection to the parent node.

```
Do you want to establish a connection to the parent node from this node? [Y/n]:
```

> **Note**
>
> If this node cannot connect to the parent node, choose `n`. The setup
> wizard will provide instructions for this scenario -- signing questions are disabled then.
535 Add the connection details for `icinga2-master1.localdomain`.

```
Please specify the master/satellite connection information:
Master/Satellite endpoint host (IP address or FQDN): 192.168.56.101
Master/Satellite endpoint port [5665]: 5665
```

543 You can add more parent nodes if necessary. Press `Enter` or choose `n`
544 if you don't want to add any. This comes in handy if you have more than one
545 parent node, e.g. two masters or two satellites.

```
Add more master/satellite endpoints? [y/N]:
```

551 Verify the parent node's certificate:

```
Parent certificate information:

Subject:     CN = icinga2-master1.localdomain
Issuer:      CN = Icinga CA
Valid From:  Sep  7 13:41:24 2017 GMT
Valid Until: Sep  3 13:41:24 2032 GMT
Fingerprint: AC 99 8B 2B 3D B0 01 00 E5 21 FA 05 2E EC D5 A9 EF 9E AA E3

Is this information correct? [y/N]: y
```

The setup wizard fetches the parent node's certificate and asks
566 you to verify this information. This is to prevent MITM attacks or
567 any kind of untrusted parent relationship.
Note: The certificate is not fetched if you have chosen not to connect
to the parent node.
572 Proceed with adding the optional client ticket for [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing):

```
Please specify the request ticket generated on your Icinga 2 master (optional).
 (Hint: # icinga2 pki ticket --cn 'icinga2-client1.localdomain'):
4f75d2ecd253575fe9180938ebff7cbca262f96e
```

580 In case you've chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
581 you can leave the ticket question blank.
583 Instead, Icinga 2 tells you to approve the request later on the master node.

```
No ticket was specified. Please approve the certificate signing request manually
on the master (see 'icinga2 ca list' and 'icinga2 ca sign --help' for details).
```

590 You can optionally specify a different bind host and/or port.

```
Please specify the API bind host/port (optional):
```

598 The next step asks you to accept configuration (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync))
599 and commands (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).

```
Accept config from parent node? [y/N]: y
Accept commands from parent node? [y/N]: y
```

606 Next you can optionally specify the local and parent zone names. This will be reflected
607 in the generated zone configuration file.
609 Set the local zone name to something else, if you are installing a satellite or secondary master instance.

```
Local zone name [icinga2-client1.localdomain]:
```

615 Set the parent zone name to something else than `master` if this client connects to a satellite instance instead of the master.

```
Parent zone name [master]:
```

621 You can add more global zones in addition to `global-templates` and `director-global` if necessary.
Press `Enter` or choose `n` if you don't want to add any additional ones.

```
Reconfiguring Icinga...

Default global zones: global-templates director-global
Do you want to specify additional global zones? [y/N]: N
```

631 Last but not least the wizard asks you whether you want to disable the inclusion of the local configuration
directory in `conf.d` or not. This defaults to disabled, as clients are either checked via command endpoint or
receive their configuration synced from the parent zone.

```
Do you want to disable the inclusion of the conf.d directory [Y/n]: Y
Disabling the inclusion of the conf.d directory...
```

641 The wizard proceeds and you are good to go.

```
Now restart your Icinga 2 daemon to finish the installation!
```

> **Note**
>
> If you have chosen not to connect to the parent node, you cannot start
> Icinga 2 yet. The wizard asked you to manually copy the master's public
> CA certificate file into `/var/lib/icinga2/certs/ca.crt`.
>
> You need to manually sign the CSR on the master node.
657 Restart Icinga 2 as requested.

```
[root@icinga2-client1.localdomain /]# systemctl restart icinga2
```

663 Here is an overview of all parameters in detail:
665 Parameter | Description
666 --------------------|--------------------
667 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
668 Master common name | **Required.** Use the common name you've specified for your master node before.
669 Establish connection to the parent node | **Optional.** Whether the node should attempt to connect to the parent node or not. Defaults to `y`.
Master/Satellite endpoint host | **Required if the client needs to connect to the master/satellite.** The parent endpoint's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
Master/Satellite endpoint port | **Optional if the client needs to connect to the master/satellite.** The parent endpoint's listening port. This information is included in the `Endpoint` object configuration.
672 Add more master/satellite endpoints | **Optional.** If you have multiple master/satellite nodes configured, add them here.
673 Parent Certificate information | **Required.** Verify that the connecting host really is the requested master node.
674 Request ticket | **Optional.** Add the [ticket](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) generated on the master.
API bind host       | **Optional.** Allows you to specify the address the ApiListener is bound to. For advanced usage only.
API bind port       | **Optional.** Allows you to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
Accept config       | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
Accept commands     | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
Local zone name     | **Optional.** Allows you to specify the name for the local zone. This comes in handy when this instance is a satellite, not a client. Defaults to the FQDN.
Parent zone name    | **Optional.** Allows you to specify the name for the parent zone. This is important if the client has a satellite instance as parent, not the master. Defaults to `master`.
Global zones        | **Optional.** Allows you to specify more global zones in addition to `global-templates` and `director-global`. Defaults to `n`.
Disable conf.d      | **Optional.** Allows you to disable the inclusion of the `conf.d` directory which holds local example configuration. Clients should retrieve their configuration from the parent node, or act as command endpoint execution bridge. Defaults to `y`.
684 The setup wizard will ensure that the following steps are taken:
686 * Enable the `api` feature.
687 * Create a certificate signing request (CSR) for the local node.
* Request a signed certificate (optionally with the provided ticket number) on the master node.
* Allow you to verify the parent node's certificate.
690 * Store the signed client certificate and ca.crt in `/var/lib/icinga2/certs`.
691 * Update the `zones.conf` file with the new zone hierarchy.
692 * Update `/etc/icinga2/features-enabled/api.conf` (`accept_config`, `accept_commands`) and `constants.conf`.
693 * Update `/etc/icinga2/icinga2.conf` and comment out `include_recursive "conf.d"`.
695 You can verify that the certificate files are stored in the `/var/lib/icinga2/certs` directory.
> **Note**
>
> If the client is not directly connected to the certificate signing master,
> signing requests and responses might need some minutes to fully update the client certificates.
>
> If you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing),
> certificates need to be signed on the master first. Ticket-less setups require at least Icinga 2 v2.8+ on all involved instances.
705 Now that you've successfully installed a Linux/Unix satellite/client instance, please proceed to
706 the [configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
710 ### Client Setup on Windows <a id="distributed-monitoring-setup-client-windows"></a>
712 Download the MSI-Installer package from [https://packages.icinga.com/windows/](https://packages.icinga.com/windows/).
Requirements:

* Windows Vista/Server 2008 or higher
717 * Versions older than Windows 10/Server 2016 require the [Universal C Runtime for Windows](https://support.microsoft.com/en-us/help/2999226/update-for-universal-c-runtime-in-windows)
* [Microsoft .NET Framework 4.6](https://www.microsoft.com/en-US/download/details.aspx?id=53344) or higher for the setup wizard
720 The installer package includes the [NSClient++](https://www.nsclient.org/) package
721 so that Icinga 2 can use its built-in plugins. You can find more details in
722 [this chapter](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
723 The Windows package also installs native [monitoring plugin binaries](06-distributed-monitoring.md#distributed-monitoring-windows-plugins)
724 to get you started more easily.
> **Note**
>
> Please note that Icinga 2 was designed to run as a light-weight client on Windows.
> There is no support for satellite instances.
731 #### Windows Client Setup Start <a id="distributed-monitoring-setup-client-windows-start"></a>
733 Run the MSI-Installer package and follow the instructions shown in the screenshots.
735 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_01.png)
736 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_02.png)
737 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_03.png)
738 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_04.png)
739 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_05.png)
741 The graphical installer offers to run the Icinga 2 setup wizard after the installation. Select
742 the check box to proceed.
746 > You can also run the Icinga 2 setup wizard from the Start menu later.
748 On a fresh installation the setup wizard guides you through the initial configuration.
It also provides a mechanism to send a certificate request to the [CSR signing master](06-distributed-monitoring.md#distributed-monitoring-setup-sign-certificates-master).
751 The following configuration details are required:
753 Parameter | Description
754 --------------------|--------------------
755 Instance name | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
756 Setup ticket | **Optional.** Paste the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing). If left blank, the certificate request must be [signed on the master node](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
758 Fill in the required information and click `Add` to add a new master connection.
760 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_01.png)
762 Add the following details:
764 Parameter | Description
765 -------------------------------|-------------------------------
766 Instance name | **Required.** The master/satellite endpoint name where this client is a direct child of.
767 Master/Satellite endpoint host | **Required.** The master or satellite's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
768 Master/Satellite endpoint port | **Optional.** The master or satellite's listening port. This information is included in the `Endpoint` object configuration.
770 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02.png)
772 When needed you can add an additional global zone (the zones `global-templates` and `director-global` are added by default):
774 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02_global_zone.png)
776 Optionally enable the following settings:
778 Parameter | Description
779 ----------------------------------|----------------------------------
780 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
781 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
782 Run Icinga 2 service as this user | **Optional.** Specify a different Windows user. This defaults to `NT AUTHORITY\Network Service` and is required for more privileged service checks.
783 Install NSClient++ | **Optional.** The Windows installer bundles the NSClient++ installer for additional [plugin checks](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
Disable conf.d                    | **Optional.** Allows you to disable the `include_recursive "conf.d"` directive except for the `api-users.conf` file in the `icinga2.conf` file. Defaults to `true`.
786 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_03.png)
Verify the certificate from the master/satellite instance this node should connect to.
790 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_04.png)
793 #### Bundled NSClient++ Setup <a id="distributed-monitoring-setup-client-windows-nsclient"></a>
If you have chosen to install/update the NSClient++ package, the Icinga 2 setup wizard asks
you to do so.
798 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_01.png)
800 Choose the `Generic` setup.
802 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_02.png)
804 Choose the `Custom` setup type.
806 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_03.png)
808 NSClient++ does not install a sample configuration by default. Change this as shown in the screenshot.
810 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_04.png)
812 Generate a secure password and enable the web server module. **Note**: The webserver module is
813 available starting with NSClient++ 0.5.0. Icinga 2 v2.6+ is required which includes this version.
815 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_05.png)
817 Finish the installation.
819 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_06.png)
821 Open a web browser and navigate to `https://localhost:8443`. Enter the password you've configured
during the setup. In case you lost it, look into the `C:\Program Files\NSClient++\nsclient.ini`
configuration file.
827 The NSClient++ REST API can be used to query metrics. [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
828 uses this transport method.
831 #### Finish Windows Client Setup <a id="distributed-monitoring-setup-client-windows-finish"></a>
833 Finish the Windows setup wizard.
835 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_with_ticket.png)
837 If you did not provide a setup ticket, you need to sign the certificate request on the master.
The setup wizard tells you to do so. The Icinga 2 service is already running at this point
839 and will automatically receive and update a signed client certificate.
841 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_no_ticket.png)
843 Icinga 2 is automatically started as a Windows service.
845 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_running_service.png)
847 The Icinga 2 configuration is stored inside the `C:\ProgramData\icinga2` directory.
848 Click `Examine Config` in the setup wizard to open a new Explorer window.
850 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_examine_config.png)
The configuration files can be modified with your favorite editor, e.g. Notepad.
854 In order to use the [top down](06-distributed-monitoring.md#distributed-monitoring-top-down) client
configuration, prepare the following steps.
857 You don't need any local configuration on the client except for
858 CheckCommand definitions which can be synced using the global zone
859 above. Therefore disable the inclusion of the `conf.d` directory
860 in the `icinga2.conf` file.
861 Navigate to `C:\ProgramData\icinga2\etc\icinga2` and open
the `icinga2.conf` file in your preferred editor. Remove or comment out (`//`)
the following line:

```
// Commented out, not required on a client with top down mode
//include_recursive "conf.d"
```

> **Note**
>
> Packages >= 2.9 provide an option in the setup wizard to disable this.
> Defaults to disabled.
To validate the configuration on Windows, open an administrator terminal
and run the following command:

```
C:\WINDOWS\system32>cd "C:\Program Files\ICINGA2\sbin"
C:\Program Files\ICINGA2\sbin>icinga2.exe daemon -C
```

883 **Note**: You have to run this command in a shell with `administrator` privileges.
885 Now you need to restart the Icinga 2 service. Run `services.msc` from the start menu
886 and restart the `icinga2` service. Alternatively, you can use the `net {start,stop}` CLI commands.
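
For example, from an administrator command prompt (assuming the default service name `icinga2`):

```
C:\WINDOWS\system32>net stop icinga2
C:\WINDOWS\system32>net start icinga2
```
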
888 ![Icinga 2 Windows Service Start/Stop](images/distributed-monitoring/icinga2_windows_cmd_admin_net_start_stop.png)
890 Now that you've successfully installed a Windows client, please proceed to
891 the [detailed configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
893 ## Configuration Modes <a id="distributed-monitoring-configuration-modes"></a>
895 There are different ways to ensure that the Icinga 2 cluster nodes execute
896 checks, send notifications, etc.
898 The preferred method is to configure monitoring objects on the master
899 and distribute the configuration to satellites and clients.
901 The following chapters will explain this in detail with hands-on manual configuration
902 examples. You should test and implement this once to fully understand how it works.
904 Once you are familiar with Icinga 2 and distributed monitoring, you
can start with additional integrations to manage and deploy your
configuration:

* [Icinga Director](https://github.com/icinga/icingaweb2-module-director) provides a web interface to manage configuration and also allows you to sync imported resources (CMDB, PuppetDB, etc.)
909 * [Ansible Roles](https://github.com/Icinga/icinga2-ansible)
910 * [Puppet Module](https://github.com/Icinga/puppet-icinga2)
911 * [Chef Cookbook](https://github.com/Icinga/chef-icinga2)
913 More details can be found [here](13-addons.md#configuration-tools).
915 ### Top Down <a id="distributed-monitoring-top-down"></a>
917 There are two different behaviors with check execution:
919 * Send a command execution event remotely: The scheduler still runs on the parent node.
920 * Sync the host/service objects directly to the child node: Checks are executed locally.
922 Again, technically it does not matter whether this is a `client` or a `satellite`
923 which is receiving configuration or command execution events.
925 ### Top Down Command Endpoint <a id="distributed-monitoring-top-down-command-endpoint"></a>
927 This mode will force the Icinga 2 node to execute commands remotely on a specified endpoint.
928 The host/service object configuration is located on the master/satellite and the client only
needs the `CheckCommand` object definitions which are used there.
931 Every endpoint has its own remote check queue. The amount of checks executed simultaneously
932 can be limited on the endpoint with the `MaxConcurrentChecks` constant defined in [constants.conf](04-configuration.md#constants-conf). Icinga 2 may discard check requests,
933 if the remote check queue is full.
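
A minimal sketch of raising this limit in `constants.conf` on the endpoint that executes the checks (the value is only an example, not a recommendation):

```
const MaxConcurrentChecks = 512
```
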
935 ![Icinga 2 Distributed Top Down Command Endpoint](images/distributed-monitoring/icinga2_distributed_top_down_command_endpoint.png)
Advantages:

* No local checks need to be defined on the child node (client).
940 * Light-weight remote check execution (asynchronous events).
941 * No [replay log](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-command-endpoint-log-duration) is necessary for the child node.
942 * Pin checks to specific endpoints (if the child zone consists of 2 endpoints).
Disadvantages:

* If the child node is not connected, no more checks are executed.
947 * Requires additional configuration attribute specified in host/service objects.
948 * Requires local `CheckCommand` object configuration. Best practice is to use a [global config zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync).
950 To make sure that all nodes involved will accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy
on all nodes.
954 * `icinga2-master1.localdomain` is the configuration master in this scenario.
955 * `icinga2-client1.localdomain` acts as client which receives command execution messages via command endpoint from the master. In addition, it receives the global check command configuration from the master.
957 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
959 The endpoint configuration could look like this, for example:

```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111"
}
```

Next, you need to define two zones. There is no naming convention; best practice is to either use `master`, `satellite`/`client-fqdn` or to choose region names, for example `Europe`, `USA` and `Asia`.
975 **Note**: Each client requires its own zone and endpoint configuration. Best practice
976 is to use the client's FQDN for all object names.
978 The `master` zone is a parent of the `icinga2-client1.localdomain` zone:

```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master" //establish zone hierarchy
}
```

994 You don't need any local configuration on the client except for
995 CheckCommand definitions which can be synced using the global zone
996 above. Therefore disable the inclusion of the `conf.d` directory
997 in `/etc/icinga2/icinga2.conf`.

```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/icinga2.conf

// Commented out, not required on a client as command endpoint
//include_recursive "conf.d"
```

> **Note**
>
> Packages >= 2.9 provide an option in the setup wizard to disable this.
> Defaults to disabled.
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.
1014 Example on CentOS 7:

```
[root@icinga2-client1.localdomain /]# icinga2 daemon -C
[root@icinga2-client1.localdomain /]# systemctl restart icinga2

[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

1024 Once the clients have successfully connected, you are ready for the next step: **execute
1025 a remote check on the client using the command endpoint**.
1027 Include the host and service object configuration in the `master` zone
1028 -- this will help adding a secondary master for high-availability later.

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
```

1034 Add the host and service objects you want to monitor. There is
1035 no limitation for files and directories -- best practice is to
1036 sort things by type.
1038 By convention a master/satellite/client host object should use the same name as the endpoint object.
1039 You can also add multiple hosts which execute checks against remote services/clients.

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive" //check is executed on the master
  address = "192.168.56.111"

  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
```

Given that you are monitoring a Linux client, we'll add a remote [disk](10-icinga-template-library.md#plugin-check-command-disk)
check:

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.client_endpoint
}
```

1069 If you have your own custom `CheckCommand` definition, add it to the global zone:

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf

object CheckCommand "my-cmd" {
  //...
}
```

1080 Save the changes and validate the configuration on the master node:

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
```

1085 Restart the Icinga 2 daemon (example for CentOS 7):

```
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

1091 The following steps will happen:
1093 * Icinga 2 validates the configuration on `icinga2-master1.localdomain` and restarts.
1094 * The `icinga2-master1.localdomain` node schedules and executes the checks.
1095 * The `icinga2-client1.localdomain` node receives the execute command event with additional command parameters.
1096 * The `icinga2-client1.localdomain` node maps the command parameters to the local check command, executes the check locally, and sends back the check result message.
1098 As you can see, no interaction from your side is required on the client itself, and it's not necessary to reload the Icinga 2 service on the client.
1100 You have learned the basics about command endpoint checks. Proceed with
1101 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
1102 section where you can find detailed information on extending the setup.
1105 ### Top Down Config Sync <a id="distributed-monitoring-top-down-config-sync"></a>
1107 This mode syncs the object configuration files within specified zones.
1108 It comes in handy if you want to configure everything on the master node
1109 and sync the satellite checks (disk, memory, etc.). The satellites run their
1110 own local scheduler and will send the check result messages back to the master.
1112 ![Icinga 2 Distributed Top Down Config Sync](images/distributed-monitoring/icinga2_distributed_top_down_config_sync.png)
Advantages:

* Sync the configuration files from the parent zone to the child zones.
1117 * No manual restart is required on the child nodes, as syncing, validation, and restarts happen automatically.
1118 * Execute checks directly on the child node's scheduler.
1119 * Replay log if the connection drops (important for keeping the check history in sync, e.g. for SLA reports).
1120 * Use a global zone for syncing templates, groups, etc.
Disadvantages:

* Requires a config directory on the master node with the zone name underneath `/etc/icinga2/zones.d`.
1125 * Additional zone and endpoint configuration needed.
1126 * Replay log is replicated on reconnect after connection loss. This might increase the data transfer and create an overload on the connection.
1128 To make sure that all involved nodes accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy
on all nodes.
1132 * `icinga2-master1.localdomain` is the configuration master in this scenario.
1133 * `icinga2-client2.localdomain` acts as client which receives configuration from the master. Checks are scheduled locally.
1135 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
1137 The endpoint configuration could look like this:

```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112"
}
```

Next, you need to define two zones. There is no naming convention; best practice is to either use `master`, `satellite`/`client-fqdn` or to choose region names, for example `Europe`, `USA` and `Asia`.
1153 **Note**: Each client requires its own zone and endpoint configuration. Best practice
1154 is to use the client's FQDN for all object names.
1156 The `master` zone is a parent of the `icinga2-client2.localdomain` zone:

```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master" //establish zone hierarchy
}
```

1172 Edit the `api` feature on the client `icinga2-client2.localdomain` in
1173 the `/etc/icinga2/features-enabled/api.conf` file and set
1174 `accept_config` to `true`.

```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/features-enabled/api.conf

object ApiListener "api" {
  //...
  accept_config = true
}
```

Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.
1188 Example on CentOS 7:

```
[root@icinga2-client2.localdomain /]# icinga2 daemon -C
[root@icinga2-client2.localdomain /]# systemctl restart icinga2

[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

1198 **Tip**: Best practice is to use a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
1199 for common configuration items (check commands, templates, groups, etc.).
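
A minimal sketch of such a global zone in `zones.conf`; the name `global-templates` is one of the defaults created by the setup wizards, and the definition has to be identical on all nodes:

```
object Zone "global-templates" {
  global = true // configuration in this zone is synced to all nodes
}
```
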
1201 Once the clients have connected successfully, it's time for the next step: **execute
1202 a local check on the client using the configuration sync**.
1204 Navigate to `/etc/icinga2/zones.d` on your master node
1205 `icinga2-master1.localdomain` and create a new directory with the same
1206 name as your satellite/client zone name:

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/icinga2-client2.localdomain
```

1212 Add the host and service objects you want to monitor. There is
1213 no limitation for files and directories -- best practice is to
1214 sort things by type.
1216 By convention a master/satellite/client host object should use the same name as the endpoint object.
1217 You can also add multiple hosts which execute checks against remote services/clients.

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/icinga2-client2.localdomain
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim hosts.conf

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  zone = "master" //optional trick: sync the required host object to the client, but enforce the "master" zone to execute the check
}
```

Given that you are monitoring a Linux client, we'll just add a local [disk](10-icinga-template-library.md#plugin-check-command-disk)
check:

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim services.conf

object Service "disk" {
  host_name = "icinga2-client2.localdomain"

  check_command = "disk"
}
```

1243 Save the changes and validate the configuration on the master node:

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
```

1249 Restart the Icinga 2 daemon (example for CentOS 7):

```
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

1255 The following steps will happen:
1257 * Icinga 2 validates the configuration on `icinga2-master1.localdomain`.
1258 * Icinga 2 copies the configuration into its zone config store in `/var/lib/icinga2/api/zones`.
1259 * The `icinga2-master1.localdomain` node sends a config update event to all endpoints in the same or direct child zones.
1260 * The `icinga2-client2.localdomain` node accepts config and populates the local zone config store with the received config files.
1261 * The `icinga2-client2.localdomain` node validates the configuration and automatically restarts.
Again, there is no interaction required on the client itself.
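If you want to double-check the result on the client, you can have a look at the zone config store mentioned above. A minimal sanity check, assuming the default state directory:

[root@icinga2-client2.localdomain /]# ls -R /var/lib/icinga2/api/zones/

The synced configuration files for the `icinga2-client2.localdomain` zone should show up there after the restart.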
1266 You can also use the config sync inside a high-availability zone to
1267 ensure that all config objects are synced among zone members.
1269 **Note**: You can only have one so-called "config master" in a zone which stores
1270 the configuration in the `zones.d` directory.
Multiple nodes with configuration files in the `zones.d` directory are not supported.
1274 Now that you've learned the basics about the configuration sync, proceed with
1275 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
1276 section where you can find detailed information on extending the setup.
If you are eager to start fresh instead, you might take a look at the
1281 [Icinga Director](https://icinga.com/docs/director/latest/).
1283 ## Scenarios <a id="distributed-monitoring-scenarios"></a>
1285 The following examples should give you an idea on how to build your own
1286 distributed monitoring environment. We've seen them all in production
1287 environments and received feedback from our [community](https://community.icinga.com/)
1288 and [partner support](https://icinga.com/support/) channels:
1290 * [Single master with client](06-distributed-monitoring.md#distributed-monitoring-master-clients).
1291 * [HA master with clients as command endpoint](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients)
1292 * [Three level cluster](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client) with config HA masters, satellites receiving config sync, and clients checked using command endpoint.
1294 You can also extend the cluster tree depth to four levels e.g. with 2 satellite levels.
1295 Just keep in mind that multiple levels become harder to debug in case of errors.
1297 You can also start with a single master setup, and later add a secondary
1298 master endpoint. This requires an extra step with the [initial sync](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-initial-sync)
1299 for cloning the runtime state. This is described in detail [here](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients).
1301 ### Master with Clients <a id="distributed-monitoring-master-clients"></a>
1303 In this scenario, a single master node runs the check scheduler, notifications
1304 and IDO database backend and uses the [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
1305 to execute checks on the remote clients.
1307 ![Icinga 2 Distributed Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_clients.png)
1309 * `icinga2-master1.localdomain` is the primary master node.
1310 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
1314 * Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1315 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [client](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
1317 Edit the `zones.conf` configuration file on the master:
1320 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
1322 object Endpoint "icinga2-master1.localdomain" {
1325 object Endpoint "icinga2-client1.localdomain" {
1326 host = "192.168.56.111" //the master actively tries to connect to the client
1329 object Endpoint "icinga2-client2.localdomain" {
1330 host = "192.168.56.112" //the master actively tries to connect to the client
1333 object Zone "master" {
1334 endpoints = [ "icinga2-master1.localdomain" ]
1337 object Zone "icinga2-client1.localdomain" {
1338 endpoints = [ "icinga2-client1.localdomain" ]
1343 object Zone "icinga2-client2.localdomain" {
1344 endpoints = [ "icinga2-client2.localdomain" ]
1349 /* sync global commands */
1350 object Zone "global-templates" {
1355 The two client nodes do not necessarily need to know about each other. The only important thing
1356 is that they know about the parent zone and their endpoint members (and optionally the global zone).
1358 If you specify the `host` attribute in the `icinga2-master1.localdomain` endpoint object,
1359 the client will actively try to connect to the master node. Since we've specified the client
1360 endpoint's attribute on the master node already, we don't want the clients to connect to the
1361 master. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
1364 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
1366 object Endpoint "icinga2-master1.localdomain" {
1367 //do not actively connect to the master by leaving out the 'host' attribute
1370 object Endpoint "icinga2-client1.localdomain" {
1373 object Zone "master" {
1374 endpoints = [ "icinga2-master1.localdomain" ]
1377 object Zone "icinga2-client1.localdomain" {
1378 endpoints = [ "icinga2-client1.localdomain" ]
1383 /* sync global commands */
1384 object Zone "global-templates" {
1388 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
1390 object Endpoint "icinga2-master1.localdomain" {
1391 //do not actively connect to the master by leaving out the 'host' attribute
1394 object Endpoint "icinga2-client2.localdomain" {
1397 object Zone "master" {
1398 endpoints = [ "icinga2-master1.localdomain" ]
1401 object Zone "icinga2-client2.localdomain" {
1402 endpoints = [ "icinga2-client2.localdomain" ]
1407 /* sync global commands */
1408 object Zone "global-templates" {
1413 Now it is time to define the two client hosts and apply service checks using
1414 the command endpoint execution method on them. Note: You can also use the
1415 config sync mode here.
1417 Create a new configuration directory on the master node:
1420 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1423 Add the two client nodes as host objects:
1426 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
1427 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
1429 object Host "icinga2-client1.localdomain" {
1430 check_command = "hostalive"
1431 address = "192.168.56.111"
1432 vars.client_endpoint = name //follows the convention that host name == endpoint name
1435 object Host "icinga2-client2.localdomain" {
1436 check_command = "hostalive"
1437 address = "192.168.56.112"
1438 vars.client_endpoint = name //follows the convention that host name == endpoint name
1442 Add services using command endpoint checks:
1445 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
1447 apply Service "ping4" {
1448 check_command = "ping4"
1449 //check is executed on the master node
1450 assign where host.address
1453 apply Service "disk" {
1454 check_command = "disk"
1456 //specify where the check is executed
1457 command_endpoint = host.vars.client_endpoint
1459 assign where host.vars.client_endpoint
1463 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
1466 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1467 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1470 Open Icinga Web 2 and check the two newly created client hosts with two new services
1471 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
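If you prefer the command line over the web interface, a quick sanity check is to list the generated objects on the master. A small sketch using the `object list` CLI command (adjust the name filters to your environment):

[root@icinga2-master1.localdomain /]# icinga2 object list --type Host --name 'icinga2-client*'
[root@icinga2-master1.localdomain /]# icinga2 object list --type Service --name 'disk'

This prints the compiled objects including their attributes and the configuration files they were created from.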
1474 ### High-Availability Master with Clients <a id="distributed-monitoring-scenarios-ha-master-clients"></a>
1476 This scenario is similar to the one in the [previous section](06-distributed-monitoring.md#distributed-monitoring-master-clients). The only difference is that we will now set up two master nodes in a high-availability setup.
These nodes must be configured as zone and endpoint objects.
1479 ![Icinga 2 Distributed High Availability Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_ha_master_clients.png)
1481 The setup uses the capabilities of the Icinga 2 cluster. All zone members
1482 replicate cluster events amongst each other. In addition to that, several Icinga 2
1483 features can enable [HA functionality](06-distributed-monitoring.md#distributed-monitoring-high-availability-features).
1485 Best practice is to run the database backend on a dedicated server/cluster and
1486 only expose a virtual IP address to Icinga and the IDO feature. By default, only one
1487 endpoint will actively write to the backend then. Typical setups for MySQL clusters
1488 involve Master-Master-Replication (Master-Slave-Replication in both directions) or Galera,
1489 more tips can be found on our [community forums](https://community.icinga.com/).
1490 The IDO object must have the same `instance_name` on all master nodes.
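A rough sketch of what this can look like in `/etc/icinga2/features-enabled/ido-mysql.conf` on both master nodes -- the VIP address and the credentials below are placeholders for your own environment:

object IdoMysqlConnection "ido-mysql" {
  host = "192.168.56.200" //example: virtual IP address of the MySQL cluster
  user = "icinga"
  password = "icinga"
  database = "icinga"

  instance_name = "master" //must be identical on all master nodes
}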
1492 **Note**: All nodes in the same zone require that you enable the same features for high-availability (HA).
* `icinga2-master1.localdomain` is the primary master node acting as config master.
* `icinga2-master2.localdomain` is the secondary master node without config in `zones.d`.
1498 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
1502 * Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1503 * Set up `icinga2-master2.localdomain` as [client](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
1504 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (when asked for adding multiple masters, set to `y` and add the secondary master `icinga2-master2.localdomain`).
1506 In case you don't want to use the CLI commands, you can also manually create and sync the
1507 required SSL certificates. We will modify and discuss all the details of the automatically generated configuration here.
1509 Since there are now two nodes in the same zone, we must consider the
1510 [high-availability features](06-distributed-monitoring.md#distributed-monitoring-high-availability-features).
1512 * Checks and notifications are balanced between the two master nodes. That's fine, but it requires check plugins and notification scripts to exist on both nodes.
1513 * The IDO feature will only be active on one node by default. Since all events are replicated between both nodes, it is easier to just have one central database.
1515 One possibility is to use a dedicated MySQL cluster VIP (external application cluster)
1516 and leave the IDO feature with enabled HA capabilities. Alternatively,
1517 you can disable the HA feature and write to a local database on each node.
1518 Both methods require that you configure Icinga Web 2 accordingly (monitoring
1519 backend, IDO database, used transports, etc.).
1523 > You can also start with a single master shown [here](06-distributed-monitoring.md#distributed-monitoring-master-clients) and later add
1524 > the second master. This requires an extra step with the [initial sync](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-initial-sync)
> for cloning the runtime state. Once done, proceed here.
1527 The zone hierarchy could look like this. It involves putting the two master nodes
1528 `icinga2-master1.localdomain` and `icinga2-master2.localdomain` into the `master` zone.
1531 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
1533 object Endpoint "icinga2-master1.localdomain" {
1534 host = "192.168.56.101"
1537 object Endpoint "icinga2-master2.localdomain" {
1538 host = "192.168.56.102"
1541 object Endpoint "icinga2-client1.localdomain" {
1542 host = "192.168.56.111" //the master actively tries to connect to the client
1545 object Endpoint "icinga2-client2.localdomain" {
1546 host = "192.168.56.112" //the master actively tries to connect to the client
1549 object Zone "master" {
1550 endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
1553 object Zone "icinga2-client1.localdomain" {
1554 endpoints = [ "icinga2-client1.localdomain" ]
1559 object Zone "icinga2-client2.localdomain" {
1560 endpoints = [ "icinga2-client2.localdomain" ]
1565 /* sync global commands */
1566 object Zone "global-templates" {
1571 The two client nodes do not necessarily need to know about each other. The only important thing
1572 is that they know about the parent zone and their endpoint members (and optionally about the global zone).
1574 If you specify the `host` attribute in the `icinga2-master1.localdomain` and `icinga2-master2.localdomain`
1575 endpoint objects, the client will actively try to connect to the master node. Since we've specified the client
1576 endpoint's attribute on the master node already, we don't want the clients to connect to the
1577 master nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
1580 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
1582 object Endpoint "icinga2-master1.localdomain" {
1583 //do not actively connect to the master by leaving out the 'host' attribute
1586 object Endpoint "icinga2-master2.localdomain" {
1587 //do not actively connect to the master by leaving out the 'host' attribute
1590 object Endpoint "icinga2-client1.localdomain" {
1593 object Zone "master" {
1594 endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
1597 object Zone "icinga2-client1.localdomain" {
1598 endpoints = [ "icinga2-client1.localdomain" ]
1603 /* sync global commands */
1604 object Zone "global-templates" {
1608 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
1610 object Endpoint "icinga2-master1.localdomain" {
1611 //do not actively connect to the master by leaving out the 'host' attribute
1614 object Endpoint "icinga2-master2.localdomain" {
1615 //do not actively connect to the master by leaving out the 'host' attribute
1618 object Endpoint "icinga2-client2.localdomain" {
1621 object Zone "master" {
1622 endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
1625 object Zone "icinga2-client2.localdomain" {
1626 endpoints = [ "icinga2-client2.localdomain" ]
1631 /* sync global commands */
1632 object Zone "global-templates" {
1637 Now it is time to define the two client hosts and apply service checks using
1638 the command endpoint execution method. Note: You can also use the
1639 config sync mode here.
1641 Create a new configuration directory on the master node `icinga2-master1.localdomain`.
1642 **Note**: The secondary master node `icinga2-master2.localdomain` receives the
1643 configuration using the [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync).
1646 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1649 Add the two client nodes as host objects:
1652 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
1653 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
1655 object Host "icinga2-client1.localdomain" {
1656 check_command = "hostalive"
1657 address = "192.168.56.111"
1658 vars.client_endpoint = name //follows the convention that host name == endpoint name
1661 object Host "icinga2-client2.localdomain" {
1662 check_command = "hostalive"
1663 address = "192.168.56.112"
1664 vars.client_endpoint = name //follows the convention that host name == endpoint name
1668 Add services using command endpoint checks:
1671 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
1673 apply Service "ping4" {
1674 check_command = "ping4"
1675 //check is executed on the master node
1676 assign where host.address
1679 apply Service "disk" {
1680 check_command = "disk"
1682 //specify where the check is executed
1683 command_endpoint = host.vars.client_endpoint
1685 assign where host.vars.client_endpoint
1689 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
1692 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1693 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1696 Open Icinga Web 2 and check the two newly created client hosts with two new services
1697 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
1699 **Tip**: It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
1700 to make sure that your cluster notifies you in case of failure.
1703 ### Three Levels with Master, Satellites, and Clients <a id="distributed-monitoring-scenarios-master-satellite-client"></a>
1705 This scenario combines everything you've learned so far: High-availability masters,
1706 satellites receiving their configuration from the master zone, and clients checked via command
1707 endpoint from the satellite zones.
1709 ![Icinga 2 Distributed Master and Satellites with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_satellite_client.png)
1713 > It can get complicated, so grab a pen and paper and bring your thoughts to life.
1714 > Play around with a test setup before using it in a production environment!
1716 Best practice is to run the database backend on a dedicated server/cluster and
1717 only expose a virtual IP address to Icinga and the IDO feature. By default, only one
1718 endpoint will actively write to the backend then. Typical setups for MySQL clusters
1719 involve Master-Master-Replication (Master-Slave-Replication in both directions) or Galera,
1720 more tips can be found on our [community forums](https://community.icinga.com/).
* `icinga2-master1.localdomain` is the primary master node acting as configuration master.
* `icinga2-master2.localdomain` is the secondary master node without configuration in `zones.d`.
1726 * `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` are satellite nodes in a `master` child zone. They forward CSR signing requests to the master zone.
1727 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
1731 * Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1732 * Set up `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
1733 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
When asked for the parent endpoint providing CSR auto-signing capabilities,
1736 please add one of the satellite nodes. **Note**: This requires Icinga 2 v2.8+
1737 and the `CA Proxy` on all master, satellite and client nodes.
1739 Example for `icinga2-client1.localdomain`:
1742 Please specify the parent endpoint(s) (master or satellite) where this node should connect to:
1745 Parent endpoint is the first satellite `icinga2-satellite1.localdomain`:
1748 Master/Satellite Common Name (CN from your master/satellite node): icinga2-satellite1.localdomain
1749 Do you want to establish a connection to the parent node from this node? [Y/n]: y
1751 Please specify the master/satellite connection information:
1752 Master/Satellite endpoint host (IP address or FQDN): 192.168.56.105
1753 Master/Satellite endpoint port [5665]: 5665
1756 Add the second satellite `icinga2-satellite2.localdomain` as parent:
1759 Add more master/satellite endpoints? [y/N]: y
1761 Master/Satellite Common Name (CN from your master/satellite node): icinga2-satellite2.localdomain
1762 Do you want to establish a connection to the parent node from this node? [Y/n]: y
1764 Please specify the master/satellite connection information:
1765 Master/Satellite endpoint host (IP address or FQDN): 192.168.56.106
1766 Master/Satellite endpoint port [5665]: 5665
1768 Add more master/satellite endpoints? [y/N]: n
1771 The specified parent nodes will forward the CSR signing request to the master instances.
1773 Proceed with adding the optional client ticket for [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing):
1776 Please specify the request ticket generated on your Icinga 2 master (optional).
1777 (Hint: # icinga2 pki ticket --cn 'icinga2-client1.localdomain'):
1778 4f75d2ecd253575fe9180938ebff7cbca262f96e
1781 In case you've chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
1782 you can leave the ticket question blank.
1784 Instead, Icinga 2 tells you to approve the request later on the master node.
1787 No ticket was specified. Please approve the certificate signing request manually
1788 on the master (see 'icinga2 ca list' and 'icinga2 ca sign --help' for details).
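On the master node you can then list the pending requests and sign them manually, where `<fingerprint>` is the value shown by `ca list`:

[root@icinga2-master1.localdomain /]# icinga2 ca list
[root@icinga2-master1.localdomain /]# icinga2 ca sign <fingerprint>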
1791 You can optionally specify a different bind host and/or port.
1794 Please specify the API bind host/port (optional):
1799 The next step asks you to accept configuration (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync))
1800 and commands (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
1803 Accept config from parent node? [y/N]: y
1804 Accept commands from parent node? [y/N]: y
1807 Next you can optionally specify the local and parent zone names. This will be reflected
1808 in the generated zone configuration file.
1811 Local zone name [icinga2-client1.localdomain]: icinga2-client1.localdomain
1814 Set the parent zone name to `satellite` for this client.
1817 Parent zone name [master]: satellite
1820 You can add more global zones in addition to `global-templates` and `director-global` if necessary.
Press `Enter` or choose `n` if you don't want to add any additional global zones.
1824 Reconfiguring Icinga...
1826 Default global zones: global-templates director-global
1827 Do you want to specify additional global zones? [y/N]: N
Last but not least, the wizard asks you whether you want to disable the inclusion of the local configuration
directory `conf.d`. This defaults to disabled, as clients are either checked via command endpoint or
receive their configuration synced from the parent zone.
1835 Do you want to disable the inclusion of the conf.d directory [Y/n]: Y
1836 Disabling the inclusion of the conf.d directory...
1840 **We'll discuss the details of the required configuration below. Most of this
1841 configuration can be rendered by the setup wizards.**
1843 The zone hierarchy can look like this. We'll define only the directly connected zones here.
1845 The master instances should actively connect to the satellite instances, therefore
1846 the configuration on `icinga2-master1.localdomain` and `icinga2-master2.localdomain`
1847 must include the `host` attribute for the satellite endpoints:
1850 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
1852 object Endpoint "icinga2-master1.localdomain" {
1856 object Endpoint "icinga2-master2.localdomain" {
1857 host = "192.168.56.102"
1860 object Endpoint "icinga2-satellite1.localdomain" {
1861 host = "192.168.56.105"
1864 object Endpoint "icinga2-satellite2.localdomain" {
1865 host = "192.168.56.106"
1868 object Zone "master" {
1869 endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
1872 object Zone "satellite" {
1873 endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
1878 /* sync global commands */
1879 object Zone "global-templates" {
1883 object Zone "director-global" {
1889 In contrast to that, the satellite instances `icinga2-satellite1.localdomain`
and `icinga2-satellite2.localdomain` should not actively connect to the master nodes:
1894 [root@icinga2-satellite1.localdomain /]# vim /etc/icinga2/zones.conf
1896 object Endpoint "icinga2-master1.localdomain" {
1897 //this endpoint will connect to us
1900 object Endpoint "icinga2-master2.localdomain" {
1901 //this endpoint will connect to us
1904 object Endpoint "icinga2-satellite1.localdomain" {
1908 object Endpoint "icinga2-satellite2.localdomain" {
1909 host = "192.168.56.106"
1912 object Zone "master" {
1913 endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
1916 object Zone "satellite" {
1917 endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
1922 /* sync global commands */
1923 object Zone "global-templates" {
1927 object Zone "director-global" {
1931 Keep in mind to control the endpoint [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction)
1932 using the `host` attribute, also for other endpoints in the same zone.
Adapt the configuration for `icinga2-master2.localdomain` and `icinga2-satellite2.localdomain` accordingly.
1936 Since we want to use [top down command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) checks,
1937 we must configure the client endpoint and zone objects.
1938 In order to minimize the effort, we'll sync the client zone and endpoint configuration to the
1939 satellites where the connection information is needed as well.
1942 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/{master,satellite,global-templates}
1943 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite
1945 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf
1947 object Endpoint "icinga2-client1.localdomain" {
1948 host = "192.168.56.111" //the satellite actively tries to connect to the client
1951 object Zone "icinga2-client1.localdomain" {
1952 endpoints = [ "icinga2-client1.localdomain" ]
1954 parent = "satellite"
1957 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf
1959 object Endpoint "icinga2-client2.localdomain" {
1960 host = "192.168.56.112" //the satellite actively tries to connect to the client
1963 object Zone "icinga2-client2.localdomain" {
1964 endpoints = [ "icinga2-client2.localdomain" ]
1966 parent = "satellite"
1970 The two client nodes do not necessarily need to know about each other, either. The only important thing
1971 is that they know about the parent zone (the satellite) and their endpoint members (and optionally the global zone).
1973 If you specify the `host` attribute in the `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain`
1974 endpoint objects, the client node will actively try to connect to the satellite node. Since we've specified the client
1975 endpoint's attribute on the satellite node already, we don't want the client node to connect to the
1976 satellite nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
1978 Example for `icinga2-client1.localdomain`:
1981 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
1983 object Endpoint "icinga2-satellite1.localdomain" {
1984 //do not actively connect to the satellite by leaving out the 'host' attribute
1987 object Endpoint "icinga2-satellite2.localdomain" {
1988 //do not actively connect to the satellite by leaving out the 'host' attribute
1991 object Endpoint "icinga2-client1.localdomain" {
1995 object Zone "satellite" {
1996 endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
1999 object Zone "icinga2-client1.localdomain" {
2000 endpoints = [ "icinga2-client1.localdomain" ]
2002 parent = "satellite"
2005 /* sync global commands */
2006 object Zone "global-templates" {
2010 object Zone "director-global" {
2015 Example for `icinga2-client2.localdomain`:
2018 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
2020 object Endpoint "icinga2-satellite1.localdomain" {
2021 //do not actively connect to the satellite by leaving out the 'host' attribute
2024 object Endpoint "icinga2-satellite2.localdomain" {
2025 //do not actively connect to the satellite by leaving out the 'host' attribute
2028 object Endpoint "icinga2-client2.localdomain" {
2032 object Zone "satellite" {
2033 endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
2036 object Zone "icinga2-client2.localdomain" {
2037 endpoints = [ "icinga2-client2.localdomain" ]
2039 parent = "satellite"
2042 /* sync global commands */
2043 object Zone "global-templates" {
2047 object Zone "director-global" {
2052 Now it is time to define the two client hosts on the master, sync them to the satellites
2053 and apply service checks using the command endpoint execution method to them.
2054 Add the two client nodes as host objects to the `satellite` zone.
2056 We've already created the directories in `/etc/icinga2/zones.d` including the files for the
2057 zone and endpoint configuration for the clients.
2060 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite
2063 Add the host object configuration for the `icinga2-client1.localdomain` client. You should
2064 have created the configuration file in the previous steps and it should contain the endpoint
2065 and zone object configuration already.
2068 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf
2070 object Host "icinga2-client1.localdomain" {
2071 check_command = "hostalive"
2072 address = "192.168.56.111"
2073 vars.client_endpoint = name //follows the convention that host name == endpoint name
2077 Add the host object configuration for the `icinga2-client2.localdomain` client configuration file:
2080 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf
2082 object Host "icinga2-client2.localdomain" {
2083 check_command = "hostalive"
2084 address = "192.168.56.112"
2085 vars.client_endpoint = name //follows the convention that host name == endpoint name
2089 Add a service object which is executed on the satellite nodes (e.g. `ping4`). Pin the apply rule to the `satellite` zone only.
2092 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf
2094 apply Service "ping4" {
2095 check_command = "ping4"
2096 //check is executed on the satellite node
2097 assign where host.zone == "satellite" && host.address
2101 Add services using command endpoint checks. Pin the apply rules to the `satellite` zone only.
2104 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf
2106 apply Service "disk" {
2107 check_command = "disk"
2109 //specify where the check is executed
2110 command_endpoint = host.vars.client_endpoint
2112 assign where host.zone == "satellite" && host.vars.client_endpoint
2116 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
2119 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2120 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
2123 Open Icinga Web 2 and check the two newly created client hosts with two new services
2124 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
2128 > It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
2129 to make sure that your cluster notifies you in case of failure.
2131 ## Best Practice <a id="distributed-monitoring-best-practice"></a>
2133 We've put together a collection of configuration examples from community feedback.
If you'd like to share your tips and tricks with us, please join the [community channels](https://icinga.com/community/)!
2136 ### Global Zone for Config Sync <a id="distributed-monitoring-global-zone-config-sync"></a>
2138 Global zones can be used to sync generic configuration objects
2139 to all nodes depending on them. Common examples are:
2141 * Templates which are imported into zone specific objects.
2142 * Command objects referenced by Host, Service, Notification objects.
2143 * Apply rules for services, notifications and dependencies.
2144 * User objects referenced in notifications.
2146 * TimePeriod objects.
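As a minimal sketch, a template synced through the default `global-templates` zone could look like this; the values are just examples:

[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/templates.conf

template Service "generic-service" {
  max_check_attempts = 3
  check_interval = 1m
  retry_interval = 30s
}

Service objects in any zone can then `import "generic-service"` without repeating these attributes.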
Plugin scripts and binaries cannot be synced; the config sync is for Icinga 2
configuration files only. Use your preferred package repository
and/or configuration management tool (Puppet, Ansible, Chef, etc.) to distribute plugins and scripts to your nodes.
2153 **Note**: Checkable objects (hosts and services) cannot be put into a global
2154 zone. The configuration validation will terminate with an error.
2156 The zone object configuration must be deployed on all nodes which should receive
2157 the global configuration files:
2160 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
object Zone "global-commands" {
  global = true
}
2167 The default global zones generated by the setup wizards are called `global-templates` and `director-global`.
While you can use `global-templates` for your own global configuration, `director-global` is reserved for use
by [Icinga Director](https://icinga.com/docs/director/latest/). Please don't
place any configuration in it manually.
2173 Similar to the zone configuration sync you'll need to create a new directory in
2174 `/etc/icinga2/zones.d`:
2177 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-commands
2180 Next, add a new check command, for example:
2183 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-commands/web.conf
object CheckCommand "webinject" {
  command = [ PluginDir + "/check_webinject" ] //assumption: the plugin is installed in PluginDir, adjust to your environment
  //...
}
Restart the client(s) which should receive the global zone before
restarting the parent master/satellite nodes.
2193 Then validate the configuration on the master node and restart Icinga 2.
2195 **Tip**: You can copy the example configuration files located in `/etc/icinga2/conf.d`
2196 into the default global zone `global-templates`.
2201 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/conf.d
2202 [root@icinga2-master1.localdomain /etc/icinga2/conf.d]# cp {commands,groups,notifications,services,templates,timeperiods,users}.conf /etc/icinga2/zones.d/global-templates
2205 ### Health Checks <a id="distributed-monitoring-health-checks"></a>
2207 In case of network failures or other problems, your monitoring might
either have late check results or just send out mass alarms for unknown checks.
2211 In order to minimize the problems caused by this, you should configure
2212 additional health checks.
2214 The `cluster` check, for example, will check if all endpoints in the current zone and the directly
2215 connected zones are working properly:
2218 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
2219 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf
object Host "icinga2-master1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.101"
}

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf

object Service "cluster" {
  check_command = "cluster"

  host_name = "icinga2-master1.localdomain"
}
2237 The `cluster-zone` check will test whether the configured target zone is currently
2238 connected or not. This example adds a health check for the [ha master with clients scenario](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients).
2241 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf
apply Service "cluster-health" {
  check_command = "cluster-zone"

  display_name = "cluster-health-" + host.name

  /* This follows the convention that the client zone name is the FQDN which is the same as the host object name. */
  vars.cluster_zone = host.name

  assign where host.vars.client_endpoint
}
2255 In case you cannot assign the `cluster_zone` attribute, add specific
2256 checks to your cluster:
2259 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf
object Service "cluster-zone-satellite" {
  check_command = "cluster-zone"

  vars.cluster_zone = "satellite"

  host_name = "icinga2-master1.localdomain"
}
2271 If you are using top down checks with command endpoint configuration, you can
2272 add a dependency which prevents notifications for all other failing services:
2275 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/dependencies.conf
apply Dependency "health-check" to Service {
  parent_service_name = "cluster-health"

  disable_notifications = true

  assign where host.vars.client_endpoint
  ignore where service.name == "cluster-health"
}
2288 ### Pin Checks in a Zone <a id="distributed-monitoring-pin-checks-zone"></a>
2290 In case you want to pin specific checks to their endpoints in a given zone you'll need to use
the `command_endpoint` attribute. This is reasonable if you want to
execute a local disk check in the `master` zone on a specific endpoint, for example.
2295 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
2296 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf
object Host "icinga2-master1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.101"
}

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf

apply Service "disk" {
  check_command = "disk"

  command_endpoint = host.name //requires a host object matching the endpoint object name e.g. icinga2-master1.localdomain

  assign where host.zone == "master" && match("icinga2-master*", host.name)
}
2314 The `host.zone` attribute check inside the expression ensures that
2315 the service object is only created for host objects inside the `master`
zone. In addition, the [match](18-library-reference.md#global-functions-match)
function ensures that services are only created for the master nodes.
2319 ### Windows Firewall <a id="distributed-monitoring-windows-firewall"></a>
2321 #### ICMP Requests <a id="distributed-monitoring-windows-firewall-icmp"></a>
2323 By default ICMP requests are disabled in the Windows firewall. You can
2324 change that by [adding a new rule](https://support.microsoft.com/en-us/kb/947709).
2327 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow
2330 #### Icinga 2 <a id="distributed-monitoring-windows-firewall-icinga2"></a>
2332 If your master/satellite nodes should actively connect to the Windows client
2333 you'll also need to ensure that port `5665` is enabled.
2336 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 5665 (Icinga 2)" dir=in action=allow protocol=TCP localport=5665
2339 #### NSClient++ API <a id="distributed-monitoring-windows-firewall-nsclient-api"></a>
2341 If the [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
2342 plugin is used to query NSClient++, you need to ensure that its port is enabled.
2345 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 8443 (NSClient++ API)" dir=in action=allow protocol=TCP localport=8443
For security reasons, it is advised to enable the NSClient++ HTTP API for local
connections from the Icinga 2 client only. Remote connections to the HTTP API
are not recommended.
2352 ### Windows Client and Plugins <a id="distributed-monitoring-windows-plugins"></a>
2354 The Icinga 2 package on Windows already provides several plugins.
2355 Detailed [documentation](10-icinga-template-library.md#windows-plugins) is available for all check command definitions.
2357 Add the following `include` statement on all your nodes (master, satellite, client):
2360 vim /etc/icinga2/icinga2.conf
2362 include <windows-plugins>
2365 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
2366 scenario we'll now add a local disk check.
2368 First, add the client node as host object:
2371 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2372 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
2374 object Host "icinga2-client2.localdomain" {
2375 check_command = "hostalive"
2376 address = "192.168.56.112"
2377 vars.client_endpoint = name //follows the convention that host name == endpoint name
2378 vars.os_type = "windows"
2382 Next, add the disk check using command endpoint checks (details in the
2383 [disk-windows](10-icinga-template-library.md#windows-plugins-disk-windows) documentation):
2386 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
2388 apply Service "disk C:" {
2389 check_command = "disk-windows"
2391 vars.disk_win_path = "C:"
2393 //specify where the check is executed
2394 command_endpoint = host.vars.client_endpoint
2396 assign where host.vars.os_type == "windows" && host.vars.client_endpoint
2400 Validate the configuration and restart Icinga 2.
2403 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2404 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
2407 Open Icinga Web 2 and check your newly added Windows disk check :)
2409 ![Icinga 2 Client Windows](images/distributed-monitoring/icinga2_distributed_windows_client_disk_icingaweb2.png)
2411 If you want to add your own plugins please check [this chapter](05-service-monitoring.md#service-monitoring-requirements)
2412 for the requirements.
2414 ### Windows Client and NSClient++ <a id="distributed-monitoring-windows-nscp"></a>
2416 There are two methods available for querying NSClient++:
2418 * Query the [HTTP API](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api) locally from an Icinga 2 client (requires a running NSClient++ service)
2419 * Run a [local CLI check](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-local) (does not require NSClient++ as a service)
2421 Both methods have their advantages and disadvantages. One thing to
2422 note: If you rely on performance counter delta calculations such as
2423 CPU utilization, please use the HTTP API instead of the CLI sample call.
#### NSClient++ with check_nscp_api <a id="distributed-monitoring-windows-nscp-check-api"></a>
2427 The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
2428 you to install the NSClient++ package. In addition to the Windows plugins you can
2429 use the [nscp_api command](10-icinga-template-library.md#nscp-check-api) provided by the Icinga Template Library (ITL).
2431 The initial setup for the NSClient++ API and the required arguments
is described in the ITL chapter for the [nscp_api](10-icinga-template-library.md#nscp-check-api) CheckCommand.
2434 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
2435 scenario we'll now add a local nscp check which queries the NSClient++ API to check the free disk space.
Define a host object called `icinga2-client1.localdomain` on the master. Add the `nscp_api_password`
2438 custom variable and specify the drives to check.
2441 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2442 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
2444 object Host "icinga2-client1.localdomain" {
2445 check_command = "hostalive"
2446 address = "192.168.56.111"
2447 vars.client_endpoint = name //follows the convention that host name == endpoint name
2448 vars.os_type = "Windows"
2449 vars.nscp_api_password = "icinga"
2450 vars.drives = [ "C:", "D:" ]
2454 The service checks are generated using an [apply for](03-monitoring-basics.md#using-apply-for)
2455 rule based on `host.vars.drives`:
2458 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
2460 apply Service "nscp-api-" for (drive in host.vars.drives) {
2461 import "generic-service"
2463 check_command = "nscp_api"
2464 command_endpoint = host.vars.client_endpoint
2466 //display_name = "nscp-drive-" + drive
2468 vars.nscp_api_host = "localhost"
2469 vars.nscp_api_query = "check_drivesize"
2470 vars.nscp_api_password = host.vars.nscp_api_password
2471 vars.nscp_api_arguments = [ "drive=" + drive ]
2473 ignore where host.vars.os_type != "Windows"
2477 Validate the configuration and restart Icinga 2.
2480 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2481 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
Two new services ("nscp-api-C:" and "nscp-api-D:") will be visible in Icinga Web 2.
2486 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-api](images/distributed-monitoring/icinga2_distributed_windows_nscp_api_drivesize_icingaweb2.png)
2488 Note: You can also omit the `command_endpoint` configuration to execute
2489 the command on the master. This also requires a different value for `nscp_api_host`
2490 which defaults to `host.address`.
2493 //command_endpoint = host.vars.client_endpoint
2495 //vars.nscp_api_host = "localhost"
2498 You can verify the check execution by looking at the `Check Source` attribute
2499 in Icinga Web 2 or the REST API.
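A minimal sketch for the REST API variant, assuming the default API port and an ApiUser `root` with the password `icinga` -- replace these with the credentials from your `/etc/icinga2/conf.d/api-users.conf`:

[root@icinga2-master1.localdomain /]# curl -k -s -u root:icinga \
 'https://icinga2-master1.localdomain:5665/v1/objects/services/icinga2-client1.localdomain!nscp-api-C%3A?attrs=last_check_result&pretty=1'

The `check_source` field inside `last_check_result` should point to the client endpoint as long as `command_endpoint` is set.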
2501 If you want to monitor specific Windows services, you could use the following example:
2504 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2505 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
2507 object Host "icinga2-client1.localdomain" {
2508 check_command = "hostalive"
2509 address = "192.168.56.111"
2510 vars.client_endpoint = name //follows the convention that host name == endpoint name
2511 vars.os_type = "Windows"
2512 vars.nscp_api_password = "icinga"
2513 vars.services = [ "Windows Update", "wscsvc" ]
2516 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
2518 apply Service "nscp-api-" for (svc in host.vars.services) {
2519 import "generic-service"
2521 check_command = "nscp_api"
2522 command_endpoint = host.vars.client_endpoint
2524 //display_name = "nscp-service-" + svc
2526 vars.nscp_api_host = "localhost"
2527 vars.nscp_api_query = "check_service"
2528 vars.nscp_api_password = host.vars.nscp_api_password
2529 vars.nscp_api_arguments = [ "service=" + svc ]
2531 ignore where host.vars.os_type != "Windows"
#### NSClient++ with nscp-local <a id="distributed-monitoring-windows-nscp-check-local"></a>
2537 The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
2538 you to install the NSClient++ package. In addition to the Windows plugins you can
2539 use the [nscp-local commands](10-icinga-template-library.md#nscp-plugin-check-commands)
2540 provided by the Icinga Template Library (ITL).
2542 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++](images/distributed-monitoring/icinga2_distributed_windows_nscp.png)
2544 Add the following `include` statement on all your nodes (master, satellite, client):
vim /etc/icinga2/icinga2.conf

include <nscp>
2552 The CheckCommand definitions will automatically determine the installed path
2553 to the `nscp.exe` binary.
2555 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
2556 scenario we'll now add a local nscp check querying a given performance counter.
2558 First, add the client node as host object:
2561 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2562 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
2564 object Host "icinga2-client1.localdomain" {
2565 check_command = "hostalive"
2566 address = "192.168.56.111"
2567 vars.client_endpoint = name //follows the convention that host name == endpoint name
2568 vars.os_type = "windows"
2572 Next, add a performance counter check using command endpoint checks (details in the
2573 [nscp-local-counter](10-icinga-template-library.md#nscp-check-local-counter) documentation):
2576 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
2578 apply Service "nscp-local-counter-cpu" {
2579 check_command = "nscp-local-counter"
2580 command_endpoint = host.vars.client_endpoint
2582 vars.nscp_counter_name = "\\Processor(_total)\\% Processor Time"
2583 vars.nscp_counter_perfsyntax = "Total Processor Time"
2584 vars.nscp_counter_warning = 1
2585 vars.nscp_counter_critical = 5
2587 vars.nscp_counter_showall = true
2589 assign where host.vars.os_type == "windows" && host.vars.client_endpoint
2593 Validate the configuration and restart Icinga 2.
2596 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2597 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
2600 Open Icinga Web 2 and check your newly added Windows NSClient++ check :)
2602 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-local](images/distributed-monitoring/icinga2_distributed_windows_nscp_counter_icingaweb2.png)
2606 > In order to measure CPU load, you'll need a running NSClient++ service.
2607 > Therefore it is advised to use a local [nscp-api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
2608 > check against its REST API.
2610 ## Advanced Hints <a id="distributed-monitoring-advanced-hints"></a>
2612 You can find additional hints in this section if you prefer to go your own route
2613 with automating setups (setup, certificates, configuration).
2615 ### Certificate Auto-Renewal <a id="distributed-monitoring-certificate-auto-renewal"></a>
2617 Icinga 2 v2.8+ added the possibility that nodes request certificate updates
on their own. If the certificate is due to expire soon, they automatically
renew the already signed certificate by sending a signing request to the
parent node. You'll also see a message in the logs if certificate renewal is required.
2623 ### High-Availability for Icinga 2 Features <a id="distributed-monitoring-high-availability-features"></a>
2625 All nodes in the same zone require that you enable the same features for high-availability (HA).
2627 By default, the following features provide advanced HA functionality:
2629 * [Checks](06-distributed-monitoring.md#distributed-monitoring-high-availability-checks) (load balanced, automated failover).
2630 * [Notifications](06-distributed-monitoring.md#distributed-monitoring-high-availability-notifications) (load balanced, automated failover).
2631 * [DB IDO](06-distributed-monitoring.md#distributed-monitoring-high-availability-db-ido) (Run-Once, automated failover).
2632 * [Elasticsearch](09-object-types.md#objecttype-elasticsearchwriter)
2633 * [Gelf](09-object-types.md#objecttype-gelfwriter)
2634 * [Graphite](09-object-types.md#objecttype-graphitewriter)
2635 * [InfluxDB](09-object-types.md#objecttype-influxdbwriter)
2636 * [OpenTsdb](09-object-types.md#objecttype-opentsdbwriter)
2637 * [Perfdata](09-object-types.md#objecttype-perfdatawriter) (for PNP)
2639 #### High-Availability with Checks <a id="distributed-monitoring-high-availability-checks"></a>
2641 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2642 have the `checker` feature enabled.
2647 # icinga2 feature enable checker
2650 All nodes in the same zone load-balance the check execution. If one instance shuts down,
2651 the other nodes will automatically take over the remaining checks.
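To confirm which features are active on a node, you can list them, for example:

# icinga2 feature list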
2653 #### High-Availability with Notifications <a id="distributed-monitoring-high-availability-notifications"></a>
2655 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2656 have the `notification` feature enabled.
2661 # icinga2 feature enable notification
Notifications are load-balanced amongst all nodes in a zone. By default this functionality is enabled.
2666 If your nodes should send out notifications independently from any other nodes (this will cause
2667 duplicated notifications if not properly handled!), you can set `enable_ha = false`
2668 in the [NotificationComponent](09-object-types.md#objecttype-notificationcomponent) feature.
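A minimal sketch for such an override in `/etc/icinga2/features-enabled/notification.conf` -- only do this if you handle the resulting duplicate notifications yourself:

object NotificationComponent "notification" {
  enable_ha = false //this node sends notifications independently of the other zone members
}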
2670 #### High-Availability with DB IDO <a id="distributed-monitoring-high-availability-db-ido"></a>
2672 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2673 have the DB IDO feature enabled.
2675 Example DB IDO MySQL:
2678 # icinga2 feature enable ido-mysql
2681 By default the DB IDO feature only runs on one node. All other nodes in the same zone disable
2682 the active IDO database connection at runtime. The node with the active DB IDO connection is
2683 not necessarily the zone master.
2685 **Note**: The DB IDO HA feature can be disabled by setting the `enable_ha` attribute to `false`
2686 for the [IdoMysqlConnection](09-object-types.md#objecttype-idomysqlconnection) or
[IdoPgsqlConnection](09-object-types.md#objecttype-idopgsqlconnection) object on **all** nodes in the same zone.
All endpoints will then enable the DB IDO feature and connect to the configured
database and dump configuration, status and historical data on their own.
2693 If the instance with the active DB IDO connection dies, the HA functionality will
2694 automatically elect a new DB IDO master.
2696 The DB IDO feature will try to determine which cluster endpoint is currently writing
2697 to the database and bail out if another endpoint is active. You can manually verify that
2698 by running the following query command:
2701 icinga=> SELECT status_update_time, endpoint_name FROM icinga_programstatus;
2702 status_update_time | endpoint_name
2703 ------------------------+---------------
2704 2016-08-15 15:52:26+02 | icinga2-master1.localdomain
2708 This is useful when the cluster connection between endpoints breaks, and prevents
data duplication in split-brain scenarios. The failover timeout can be set via the
`failover_timeout` attribute, but not lower than 60 seconds.
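Putting both settings together, a sketch for the IDO feature configuration could look like this; the values are examples, not recommendations:

object IdoMysqlConnection "ido-mysql" {
  //... database connection settings

  enable_ha = true //default; set to false on all zone members to disable IDO HA
  failover_timeout = 120s //example value; must not be lower than 60 seconds
}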
2712 ### Endpoint Connection Direction <a id="distributed-monitoring-advanced-hints-connection-direction"></a>
2714 Nodes will attempt to connect to another node when its local [Endpoint](09-object-types.md#objecttype-endpoint) object
2715 configuration specifies a valid `host` attribute (FQDN or IP address).
2717 Example for the master node `icinga2-master1.localdomain` actively connecting
2718 to the client node `icinga2-client1.localdomain`:
2721 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
2725 object Endpoint "icinga2-client1.localdomain" {
2726 host = "192.168.56.111" //the master actively tries to connect to the client
2731 Example for the client node `icinga2-client1.localdomain` not actively
2732 connecting to the master node `icinga2-master1.localdomain`:
2735 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
2739 object Endpoint "icinga2-master1.localdomain" {
2740 //do not actively connect to the master by leaving out the 'host' attribute
2745 It is not necessary that both the master and the client node establish
2746 two connections to each other. Icinga 2 will only use one connection
2747 and close the second connection if established.
**Tip**: Choose either to let master/satellite nodes connect to client nodes, or vice versa.
2753 ### Disable Log Duration for Command Endpoints <a id="distributed-monitoring-advanced-hints-command-endpoint-log-duration"></a>
2755 The replay log is a built-in mechanism to ensure that nodes in a distributed setup
2756 keep the same history (check results, notifications, etc.) when nodes are temporarily
2757 disconnected and then reconnect.
2759 This functionality is not needed when a master/satellite node is sending check
execution events to a client which is purely configured for [command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) checks only.
2763 The [Endpoint](09-object-types.md#objecttype-endpoint) object attribute `log_duration` can
be lowered or set to 0 to fully disable any log replay updates when the
2765 client is not connected.
2767 Configuration on the master node `icinga2-master1.localdomain`:
2770 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the master actively tries to connect to the client
  log_duration = 0
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112" //the master actively tries to connect to the client
  log_duration = 0
}
2785 Configuration on the client `icinga2-client1.localdomain`:
2788 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
2792 object Endpoint "icinga2-master1.localdomain" {
2793 //do not actively connect to the master by leaving out the 'host' attribute
2797 object Endpoint "icinga2-master2.localdomain" {
2798 //do not actively connect to the master by leaving out the 'host' attribute
### Initial Sync for new Endpoints in a Zone <a id="distributed-monitoring-advanced-hints-initial-sync"></a>

> **Note**
>
> This is required if you decide to change an already running single endpoint production
> environment into a HA-enabled cluster zone with two endpoints.
> The [initial setup](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients)
> with 2 HA masters doesn't require this step.

In order to make sure that all of your zone endpoints have the same state you need
to pick the authoritative running one and copy the following content:

* State file from `/var/lib/icinga2/icinga2.state`
* Internal config package for runtime created objects (downtimes, comments, hosts, etc.) at `/var/lib/icinga2/api/packages/_api`

If you need already deployed config packages from the Director, or synced cluster zones,
you can also sync the entire `/var/lib/icinga2/api/packages` directory. This directory should also be
included in your [backup strategy](02-installation.md#install-backup).

Do **not** sync `/var/lib/icinga2/api/zones*` manually; this is an internal directory
and is handled by the Icinga cluster config sync itself.

> **Note**
>
> Ensure that all endpoints are shut down during this procedure. Once you have
> synced the cached files, proceed with configuring the remaining endpoints
> to let them know about the new master/satellite node (zones.conf).

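A minimal sketch of this copy step, assuming the authoritative node is `icinga2-master1.localdomain`,
the new endpoint is `icinga2-master2.localdomain`, SSH access between both hosts, and that Icinga 2
is stopped on both endpoints as described above:

```
[root@icinga2-master1.localdomain /]# systemctl stop icinga2
[root@icinga2-master2.localdomain /]# systemctl stop icinga2
[root@icinga2-master1.localdomain /]# rsync -av /var/lib/icinga2/icinga2.state \
  root@icinga2-master2.localdomain:/var/lib/icinga2/icinga2.state
[root@icinga2-master1.localdomain /]# rsync -av /var/lib/icinga2/api/packages/_api/ \
  root@icinga2-master2.localdomain:/var/lib/icinga2/api/packages/_api/
[root@icinga2-master2.localdomain /]# chown -R icinga:icinga /var/lib/icinga2
[root@icinga2-master1.localdomain /]# systemctl start icinga2
[root@icinga2-master2.localdomain /]# systemctl start icinga2
```
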
### Manual Certificate Creation <a id="distributed-monitoring-advanced-hints-certificates-manual"></a>

#### Create CA on the Master <a id="distributed-monitoring-advanced-hints-certificates-manual-ca"></a>

Choose the host which should store the certificate authority (one of the master nodes).

The first step is the creation of the certificate authority (CA) by running the following command:

```
[root@icinga2-master1.localdomain /root]# icinga2 pki new-ca
```

#### Create CSR and Certificate <a id="distributed-monitoring-advanced-hints-certificates-manual-create"></a>

Create a certificate signing request (CSR) for the local instance:

```
[root@icinga2-master1.localdomain /root]# icinga2 pki new-cert --cn icinga2-master1.localdomain \
  --key icinga2-master1.localdomain.key \
  --csr icinga2-master1.localdomain.csr
```

Sign the CSR with the previously created CA:

```
[root@icinga2-master1.localdomain /root]# icinga2 pki sign-csr --csr icinga2-master1.localdomain.csr --cert icinga2-master1.localdomain.crt
```

Repeat the steps for all instances in your setup.

#### Copy Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-copy"></a>

Copy the host's certificate files and the public CA certificate to `/var/lib/icinga2/certs`:

```
[root@icinga2-master1.localdomain /root]# mkdir -p /var/lib/icinga2/certs
[root@icinga2-master1.localdomain /root]# cp icinga2-master1.localdomain.{crt,key} /var/lib/icinga2/certs
[root@icinga2-master1.localdomain /root]# cp /var/lib/icinga2/ca/ca.crt /var/lib/icinga2/certs
```

Ensure that proper permissions are set (replace `icinga` with the Icinga 2 daemon user):

```
[root@icinga2-master1.localdomain /root]# chown -R icinga:icinga /var/lib/icinga2/certs
[root@icinga2-master1.localdomain /root]# chmod 600 /var/lib/icinga2/certs/*.key
[root@icinga2-master1.localdomain /root]# chmod 644 /var/lib/icinga2/certs/*.crt
```

The CA public and private key are stored in the `/var/lib/icinga2/ca` directory. Keep this path secure and include
it in your backups.

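One possible way to back up the CA directory is a simple archive; the backup path below is only
an example and should be replaced by whatever your backup strategy uses:

```
[root@icinga2-master1.localdomain /root]# mkdir -p /root/backup
[root@icinga2-master1.localdomain /root]# tar czf /root/backup/icinga2-ca.tar.gz -C /var/lib/icinga2 ca
```
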
#### Create Multiple Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-multiple"></a>

Use your preferred method to automate the certificate generation process.

```
[root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do icinga2 pki new-cert --cn $node --csr $node.csr --key $node.key; done
information/base: Writing private key to 'icinga2-master1.localdomain.key'.
information/base: Writing certificate signing request to 'icinga2-master1.localdomain.csr'.
information/base: Writing private key to 'icinga2-master2.localdomain.key'.
information/base: Writing certificate signing request to 'icinga2-master2.localdomain.csr'.
information/base: Writing private key to 'icinga2-satellite1.localdomain.key'.
information/base: Writing certificate signing request to 'icinga2-satellite1.localdomain.csr'.

[root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do sudo icinga2 pki sign-csr --csr $node.csr --cert $node.crt; done
information/pki: Writing certificate to file 'icinga2-master1.localdomain.crt'.
information/pki: Writing certificate to file 'icinga2-master2.localdomain.crt'.
information/pki: Writing certificate to file 'icinga2-satellite1.localdomain.crt'.
```

Copy and move these certificates to the respective instances e.g. with SSH/SCP.

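For example, distributing the satellite's key pair and the public CA certificate via SCP could look
like the following sketch, assuming SSH access as root to the satellite host:

```
[root@icinga2-master1.localdomain /var/lib/icinga2/certs]# ssh root@icinga2-satellite1.localdomain "mkdir -p /var/lib/icinga2/certs"
[root@icinga2-master1.localdomain /var/lib/icinga2/certs]# scp icinga2-satellite1.localdomain.crt icinga2-satellite1.localdomain.key /var/lib/icinga2/ca/ca.crt \
  root@icinga2-satellite1.localdomain:/var/lib/icinga2/certs/
```

Remember to set the ownership and permissions on the target host as shown above.
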
## Automation <a id="distributed-monitoring-automation"></a>

These hints should get you started with your own automation tools (Puppet, Ansible, Chef, Salt, etc.)
or custom scripts for automated setup.

These are collected best practices from various community channels.

* [Silent Windows setup](06-distributed-monitoring.md#distributed-monitoring-automation-windows-silent)
* [Node Setup CLI command](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup) with parameters

If you prefer an alternate method, we still recommend leaving all the Icinga 2 features intact (e.g. `icinga2 feature enable api`).
You should also use well known and documented default configuration file locations (e.g. `zones.conf`).
This will tremendously help when someone is trying to help you in the [community channels](https://icinga.com/community/).

### Silent Windows Setup <a id="distributed-monitoring-automation-windows-silent"></a>

If you want to install the client silently/unattended, use the `/qn` modifier. The
installation should not trigger a restart, but if you want to be completely sure, you can use the `/norestart` modifier.

```
C:> msiexec /i C:\Icinga2-v2.5.0-x86.msi /qn /norestart
```

Once the setup is completed you can use the `node setup` CLI command too.

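If you need to troubleshoot an unattended installation, you can additionally write a verbose installer
log using the standard `msiexec` logging switch; the log path below is only an example:

```
C:> msiexec /i C:\Icinga2-v2.5.0-x86.msi /qn /norestart /l*v C:\icinga2-install.log
```
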
### Node Setup using CLI Parameters <a id="distributed-monitoring-automation-cli-node-setup"></a>

Instead of using the `node wizard` CLI command, there is an alternative `node setup`
command available which has some prerequisites.

**Note**: The CLI command can be used on Linux/Unix and Windows operating systems.
The graphical Windows setup wizard actively uses these CLI commands.

#### Node Setup on the Master Node <a id="distributed-monitoring-automation-cli-node-setup-master"></a>

In case you want to set up a master node you must add the `--master` parameter
to the `node setup` CLI command. In addition to that the `--cn` can optionally
be passed (defaults to the FQDN).

Parameter           | Description
--------------------|--------------------
Common name (CN)    | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN. Defaults to the FQDN.
Zone name           | **Optional.** Specified with the `--zone` parameter. Defaults to `master`.
Listen on           | **Optional.** Specified with the `--listen` parameter. Syntax is `host,port`.
Disable conf.d      | **Optional.** Specified with the `--disable-confd` parameter. If provided, this disables the `include_recursive "conf.d"` directive and adds the `api-users.conf` file inclusion to `icinga2.conf`. Available since v2.9+. Not set by default for compatibility reasons with Puppet, Ansible, Chef, etc.

Example:

```
[root@icinga2-master1.localdomain /]# icinga2 node setup --master
```

In case you want to bind the `ApiListener` object to a specific
host/port you can specify it like this:

```
--listen 192.168.56.101,5665
```

In case you don't need anything in `conf.d`, use the following command line:

```
[root@icinga2-master1.localdomain /]# icinga2 node setup --master --disable-confd
```

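Putting the parameters together, a complete master setup call could look like the following sketch;
the CN, zone name and listen address are only examples and should match your environment:

```
[root@icinga2-master1.localdomain /]# icinga2 node setup --master \
  --cn icinga2-master1.localdomain \
  --zone master \
  --listen 192.168.56.101,5665 \
  --disable-confd
```
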
#### Node Setup with Satellites/Clients <a id="distributed-monitoring-automation-cli-node-setup-satellite-client"></a>

Make sure that the `/var/lib/icinga2/certs` directory exists and is owned by the `icinga`
user (or the user Icinga 2 is running as).

```
[root@icinga2-client1.localdomain /]# mkdir -p /var/lib/icinga2/certs
[root@icinga2-client1.localdomain /]# chown -R icinga:icinga /var/lib/icinga2/certs
```

First you'll need to generate a new local self-signed certificate.
Pass the following details to the `pki new-cert` CLI command:

Parameter                | Description
-------------------------|--------------------
Common name (CN)         | **Required.** By convention this should be the host's FQDN.
Client certificate files | **Required.** These generated files will be put into the specified locations (`--key` and `--cert`). By convention this should be using `/var/lib/icinga2/certs` as directory.

Example:

```
[root@icinga2-client1.localdomain /]# icinga2 pki new-cert --cn icinga2-client1.localdomain \
  --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
  --cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt
```

Request the master certificate from the master host (`icinga2-master1.localdomain`)
and store it as `trusted-parent.crt`. Review it and continue.

Pass the following details to the `pki save-cert` CLI command:

Parameter                  | Description
---------------------------|--------------------
Client certificate files   | **Required.** Pass the previously generated files using the `--key` and `--cert` parameters.
Trusted parent certificate | **Required.** Store the parent's certificate file. Manually verify that you're trusting it.
Parent host                | **Required.** FQDN or IP address of the parent host.

Example:

```
[root@icinga2-client1.localdomain /]# icinga2 pki save-cert --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
  --cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt \
  --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
  --host icinga2-master1.localdomain
```

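To manually review the fetched certificate before trusting it, you could for example inspect its
subject, issuer and fingerprint with OpenSSL (assuming the `openssl` CLI tool is installed):

```
[root@icinga2-client1.localdomain /]# openssl x509 -in /var/lib/icinga2/certs/trusted-parent.crt -noout -subject -issuer -fingerprint
```
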
Continue with the additional node setup step. Specify a local endpoint and zone name (`icinga2-client1.localdomain`)
and set the master host (`icinga2-master1.localdomain`) as parent zone configuration. Specify the path to
the previously stored trusted parent certificate.

Pass the following details to the `node setup` CLI command:

Parameter                  | Description
---------------------------|--------------------
Common name (CN)           | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN.
Request ticket             | **Optional.** Add the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing). If left out, the certificate request must be signed manually on the master node.
Trusted parent certificate | **Required.** Add the previously fetched trusted parent certificate (this step means that you've verified its origin).
Parent host                | **Optional.** FQDN or IP address of the parent host. This is where the command connects for CSR signing. If not specified, you need to manually copy the parent's public CA certificate file into `/var/lib/icinga2/certs/ca.crt` in order to start Icinga 2.
Parent endpoint            | **Required.** Specify the parent's endpoint name.
Client zone name           | **Required.** Specify the client's zone name.
Parent zone name           | **Optional.** Specify the parent's zone name.
Accept config              | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)).
Accept commands            | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
Global zones               | **Optional.** Allows you to specify more global zones in addition to `global-templates` and `director-global`.
Disable conf.d             | **Optional.** Specified with the `--disable-confd` parameter. If provided, this disables the `include_recursive "conf.d"` directive in `icinga2.conf`. Available since v2.9+. Not set by default for compatibility reasons with Puppet, Ansible, Chef, etc.

> **Note**
>
> The `--master_host` parameter is deprecated and will be removed. Please use `--parent_host` instead.

Example:

```
[root@icinga2-client1.localdomain /]# icinga2 node setup --ticket ead2d570e18c78abf285d6b85524970a0f69c22d \
  --cn icinga2-client1.localdomain \
  --endpoint icinga2-master1.localdomain \
  --zone icinga2-client1.localdomain \
  --parent_zone master \
  --parent_host icinga2-master1.localdomain \
  --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
  --accept-commands --accept-config \
  --disable-confd
```

In case the client should connect to the master node, you'll
need to modify the `--endpoint` parameter using the format `cn,host,port`:

```
--endpoint icinga2-master1.localdomain,192.168.56.101,5665
```

Specify the parent zone using the `--parent_zone` parameter. This is useful
if the client connects to a satellite, not the master instance.

```
--parent_zone satellite
```

In case the client should know the additional global zone `linux-templates`, you'll
need to set the `--global_zones` parameter.

```
--global_zones linux-templates
```

The `--parent_host` parameter is optional since v2.9 and allows you to perform a connection-less setup.
If you leave it out, you cannot start Icinga 2 yet: the CLI command asks you to manually copy the parent's public CA
certificate file into `/var/lib/icinga2/certs/ca.crt`. Once Icinga 2 is started, it sends
a ticket signing request to the parent node. If you have provided a ticket, the master node
signs the request and sends it back to the client which performs a certificate update in-memory.

In case you did not provide a ticket, you need to manually sign the CSR on the master node
which holds the CA's key pair.

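A sketch of this manual signing step on the CA master, using the `ca list` and `ca sign` CLI commands;
the fingerprint argument is a placeholder and needs to be taken from the `ca list` output:

```
[root@icinga2-master1.localdomain /]# icinga2 ca list
[root@icinga2-master1.localdomain /]# icinga2 ca sign <fingerprint-from-ca-list>
```
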
**You can find additional best practices below.**

If this client node is configured for [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
you can safely disable the `checker` feature. The `node setup` CLI command already disables the `notification` feature.

```
[root@icinga2-client1.localdomain /]# icinga2 feature disable checker
```

Disable "conf.d" inclusion if this is a [top down](06-distributed-monitoring.md#distributed-monitoring-top-down)
configured client.

```
[root@icinga2-client1.localdomain /]# sed -i 's/include_recursive "conf.d"/\/\/include_recursive "conf.d"/g' /etc/icinga2/icinga2.conf
```

**Note**: This is the default since v2.9.

**Optional**: Add an ApiUser object configuration for remote troubleshooting.

```
[root@icinga2-client1.localdomain /]# cat <<EOF >/etc/icinga2/conf.d/api-users.conf
object ApiUser "root" {
  password = "clientsupersecretpassword"
  permissions = [ "*" ]
}
EOF
```

In case you've previously disabled the "conf.d" directory, only
add the single file `conf.d/api-users.conf`:

```
[root@icinga2-client1.localdomain /]# echo 'include "conf.d/api-users.conf"' >> /etc/icinga2/icinga2.conf
```

Finally, restart Icinga 2.

```
[root@icinga2-client1.localdomain /]# systemctl restart icinga2
```

Your automation tool must then configure the master node in the meantime:

```
# cat <<EOF >>/etc/icinga2/zones.conf
object Endpoint "icinga2-client1.localdomain" {
  //client connects itself
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]
  parent = "master"
}
EOF
```

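After the zone and endpoint objects have been added on the master, it is a good idea to validate the
configuration and restart Icinga 2 there as well; a minimal sketch:

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```
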
## Using Multiple Environments <a id="distributed-monitoring-environments"></a>

In some cases it can be desired to run multiple Icinga instances on the same host.
Two potential scenarios include:

* Different versions of the same monitoring configuration (e.g. production and testing)
* Disparate sets of checks for entirely unrelated monitoring environments (e.g. infrastructure and applications)

The configuration is done with the global constants `ApiBindHost` and `ApiBindPort`
or the `bind_host` and `bind_port` attributes of the
[ApiListener](09-object-types.md#objecttype-apilistener) object.

The environment must be set with the global constant `Environment` or as an object attribute
of the [IcingaApplication](09-object-types.md#objecttype-icingaapplication) object.

In any case the constant is the default value for the attribute, and the direct configuration in the objects
takes precedence. The constants have been added to allow the values to be set from the CLI on startup.

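For example, a second (testing) instance could override these constants in its own `constants.conf`;
the values below are placeholders and need to be adapted to your environment:

```
/* constants.conf of the hypothetical second instance */
const Environment = "testing"
const ApiBindHost = "127.0.0.1"
const ApiBindPort = 5666
```
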
When Icinga establishes a TLS connection to another cluster instance it automatically uses the [SNI extension](https://en.wikipedia.org/wiki/Server_Name_Indication)
to signal which endpoint it is attempting to connect to. On its own this can already be used to position multiple
Icinga instances behind a load balancer.

SNI example: `icinga2-client1.localdomain`

However, if the environment is configured to `production`, Icinga appends the environment name to the SNI hostname like this:

SNI example with environment: `icinga2-client1.localdomain:production`

Middleware like load balancers or TLS proxies can read the SNI header and route the connection to the appropriate target.
I.e., it uses a single externally-visible TCP port (usually 5665) and forwards connections to one or more Icinga
instances which are bound to a local TCP port. It does so by inspecting the environment name that is sent as part of the