1 # Distributed Monitoring with Master, Satellites, and Clients <a id="distributed-monitoring"></a>
3 This chapter will guide you through the setup of a distributed monitoring
4 environment, including high-availability clustering and setup details
5 for the Icinga 2 client.
7 ## Roles: Master, Satellites, and Clients <a id="distributed-monitoring-roles"></a>
9 Icinga 2 nodes can be given names for easier understanding:
11 * A `master` node which is on top of the hierarchy.
12 * A `satellite` node which is a child of a `satellite` or `master` node.
13 * A `client` node which works as an `agent` connected to `master` and/or `satellite` nodes.
15 ![Icinga 2 Distributed Roles](images/distributed-monitoring/icinga2_distributed_roles.png)
Rephrasing this picture in more detail:
19 * A `master` node has no parent node.
* A `master` node is where you usually install Icinga Web 2.
21 * A `master` node can combine executed checks from child nodes into backends and notifications.
22 * A `satellite` node has a parent and a child node.
23 * A `satellite` node may execute checks on its own or delegate check execution to child nodes.
24 * A `satellite` node can receive configuration for hosts/services, etc. from the parent node.
25 * A `satellite` node continues to run even if the master node is temporarily unavailable.
26 * A `client` node only has a parent node.
27 * A `client` node will either run its own configured checks or receive command execution events from the parent node.
29 The following sections will refer to these roles and explain the
30 differences and the possibilities this kind of setup offers.
**Tip**: If you just want to install a single master node that monitors several hosts
(i.e. Icinga 2 clients), continue reading -- we'll start with the single master setup below.

In case you are planning a huge cluster setup with multiple levels and
lots of clients, read on -- we'll deal with these cases later on.
38 The installation on each system is the same: You need to install the
39 [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and the required [plugins](02-getting-started.md#setting-up-check-plugins).
The required configuration steps mostly happen
on the command line. You can also [automate the setup](06-distributed-monitoring.md#distributed-monitoring-automation).
The first thing you need to learn about a distributed setup is the hierarchy of the individual components.
46 ## Zones <a id="distributed-monitoring-zones"></a>
48 The Icinga 2 hierarchy consists of so-called [zone](09-object-types.md#objecttype-zone) objects.
49 Zones depend on a parent-child relationship in order to trust each other.
51 ![Icinga 2 Distributed Zones](images/distributed-monitoring/icinga2_distributed_zones.png)
53 Have a look at this example for the `satellite` zones which have the `master` zone as a parent zone:
object Zone "master" {
  //...
}

object Zone "satellite region 1" {
  parent = "master"
  //...
}

object Zone "satellite region 2" {
  parent = "master"
  //...
}
69 There are certain limitations for child zones, e.g. their members are not allowed
70 to send configuration commands to the parent zone members. Vice versa, the
71 trust hierarchy allows for example the `master` zone to send
72 configuration files to the `satellite` zone. Read more about this
73 in the [security section](06-distributed-monitoring.md#distributed-monitoring-security).
75 `client` nodes also have their own unique zone. By convention you
76 can use the FQDN for the zone name.
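For example, a client zone could look like this -- a minimal sketch, assuming the client's FQDN is `icinga2-client1.localdomain` (the `endpoints` attribute is explained in the next section):

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master"
}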
78 ## Endpoints <a id="distributed-monitoring-endpoints"></a>
Nodes which are members of a zone are so-called [Endpoint](09-object-types.md#objecttype-endpoint) objects.
82 ![Icinga 2 Distributed Endpoints](images/distributed-monitoring/icinga2_distributed_endpoints.png)
84 Here is an example configuration for two endpoints in different zones:
object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-satellite1.localdomain" {
  host = "192.168.56.105"
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ]
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain" ]

  parent = "master"
}
All endpoints in the same zone work as a high-availability setup. For
example, if you have two nodes in the `master` zone, they will load-balance the check execution.
106 Endpoint objects are important for specifying the connection
107 information, e.g. if the master should actively try to connect to a client.
109 The zone membership is defined inside the `Zone` object definition using
110 the `endpoints` attribute with an array of `Endpoint` names.
112 If you want to check the availability (e.g. ping checks) of the node
113 you still need a [Host](09-object-types.md#objecttype-host) object.
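A minimal sketch for such a host check, reusing the satellite endpoint from the example above:

object Host "icinga2-satellite1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.105"
}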
115 ## ApiListener <a id="distributed-monitoring-apilistener"></a>
117 In case you are using the CLI commands later, you don't have to write
118 this configuration from scratch in a text editor.
119 The [ApiListener](09-object-types.md#objecttype-apilistener) object is
120 used to load the TLS certificates and specify restrictions, e.g.
121 for accepting configuration commands.
123 It is also used for the [Icinga 2 REST API](12-icinga2-api.md#icinga2-api) which shares
124 the same host and port with the Icinga 2 Cluster protocol.
126 The object configuration is stored in the `/etc/icinga2/features-enabled/api.conf`
127 file. Depending on the configuration mode the attributes `accept_commands`
128 and `accept_config` can be configured here.
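A minimal sketch of such an `ApiListener` object with both attributes enabled (whether you want them set to `true` depends on the configuration mode, see below):

object ApiListener "api" {
  accept_commands = true
  accept_config = true
}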
130 In order to use the `api` feature you need to enable it and restart Icinga 2.
132 icinga2 feature enable api
134 ## Conventions <a id="distributed-monitoring-conventions"></a>
136 By convention all nodes should be configured using their FQDN.
138 Furthermore, you must ensure that the following names
139 are exactly the same in all configuration files:
141 * Host certificate common name (CN).
142 * Endpoint configuration object for the host.
143 * NodeName constant for the local host.
145 Setting this up on the command line will help you to minimize the effort.
146 Just keep in mind that you need to use the FQDN for endpoints and for
147 common names when asked.
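To illustrate the convention, here is a minimal sketch for a node with the FQDN `icinga2-client1.localdomain` -- the certificate common name, the `Endpoint` object name, and the `NodeName` constant all carry the same value:

/* constants.conf */
const NodeName = "icinga2-client1.localdomain"

/* zones.conf -- the endpoint object name equals NodeName and the certificate CN */
object Endpoint NodeName {
}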
149 ## Security <a id="distributed-monitoring-security"></a>
While there are certain mechanisms to ensure a secure communication between all
nodes (firewalls, policies, software hardening, etc.), Icinga 2 also provides additional security:
155 * SSL certificates are mandatory for communication between nodes. The CLI commands
156 help you create those certificates.
157 * Child zones only receive updates (check results, commands, etc.) for their configured objects.
158 * Child zones are not allowed to push configuration updates to parent zones.
159 * Zones cannot interfere with other zones and influence each other. Each checkable host or service object is assigned to **one zone** only.
160 * All nodes in a zone trust each other.
* [Config sync](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync) and [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) are disabled by default.
163 The underlying protocol uses JSON-RPC event notifications exchanged by nodes.
164 The connection is secured by TLS. The message protocol uses an internal API,
165 and as such message types and names may change internally and are not documented.
Zones build the trust relationship in a distributed environment. If you do not specify
a zone for a client and its parent zone, the parent zone's members (e.g. the master instance)
won't trust the client.
171 Building this trust is key in your distributed environment. That way the parent node
172 knows that it is able to send messages to the child zone, e.g. configuration objects,
173 configuration in global zones, commands to be executed in this zone/for this endpoint.
174 It also receives check results from the child zone for checkable objects (host/service).
Vice versa, the client trusts the master and accepts configuration and commands if enabled
in the api feature. If a client sends configuration to the parent zone, the parent nodes
will deny it. The parent zone is the configuration entity, and does not trust clients in this matter.
A client could attempt to modify a different client for example, or inject a check command
with malicious code.
182 While it may sound complicated for client setups, it removes the problem with different roles
183 and configurations for a master and a client. Both of them work the same way, are configured
184 in the same way (Zone, Endpoint, ApiListener), and you can troubleshoot and debug them in just one go.
186 ## Master Setup <a id="distributed-monitoring-setup-master"></a>
188 This section explains how to install a central single master node using
189 the `node wizard` command. If you prefer to do an automated installation, please
190 refer to the [automated setup](06-distributed-monitoring.md#distributed-monitoring-automation) section.
192 Install the [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and setup
the required [plugins](02-getting-started.md#setting-up-check-plugins) if you haven't done so already.
196 **Note**: Windows is not supported for a master node setup.
The next step is to run the `node wizard` CLI command. Prior to that,
make sure to collect the required information:
201 Parameter | Description
202 --------------------|--------------------
203 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
204 Master zone name | **Optional.** Allows to specify the master zone name. Defaults to `master`.
205 Global zones | **Optional.** Allows to specify more global zones in addition to `global-templates` and `director-global`. Defaults to `n`.
206 API bind host | **Optional.** Allows to specify the address the ApiListener is bound to. For advanced usage only.
207 API bind port | **Optional.** Allows to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
208 Disable conf.d | **Optional.** Allows to disable the `include_recursive "conf.d"` directive except for the `api-users.conf` file in the `icinga2.conf` file. Defaults to `y`. Configuration on the master is discussed below.
210 The setup wizard will ensure that the following steps are taken:
212 * Enable the `api` feature.
213 * Generate a new certificate authority (CA) in `/var/lib/icinga2/ca` if it doesn't exist.
214 * Create a certificate for this node signed by the CA key.
215 * Update the [zones.conf](04-configuring-icinga-2.md#zones-conf) file with the new zone hierarchy.
216 * Update the [ApiListener](06-distributed-monitoring.md#distributed-monitoring-apilistener) and [constants](04-configuring-icinga-2.md#constants-conf) configuration.
217 * Update the [icinga2.conf](04-configuring-icinga-2.md#icinga2-conf) to disable the `conf.d` inclusion, and add the `api-users.conf` file inclusion.
219 Here is an example of a master setup for the `icinga2-master1.localdomain` node on CentOS 7:
222 [root@icinga2-master1.localdomain /]# icinga2 node wizard
224 Welcome to the Icinga 2 Setup Wizard!
226 We will guide you through all required configuration details.
228 Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]: n
230 Starting the Master setup routine...
232 Please specify the common name (CN) [icinga2-master1.localdomain]: icinga2-master1.localdomain
233 Reconfiguring Icinga...
234 Checking for existing certificates for common name 'icinga2-master1.localdomain'...
235 Certificates not yet generated. Running 'api setup' now.
236 Generating master configuration for Icinga 2.
237 Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.
239 Master zone name [master]:
241 Default global zones: global-templates director-global
242 Do you want to specify additional global zones? [y/N]: N
Please specify the API bind host/port (optional):
Bind Host []:
Bind Port []:
248 Do you want to disable the inclusion of the conf.d directory [Y/n]:
249 Disabling the inclusion of the conf.d directory...
250 Checking if the api-users.conf file exists...
254 Now restart your Icinga 2 daemon to finish the installation!
257 You can verify that the CA public and private keys are stored in the `/var/lib/icinga2/ca` directory.
258 Keep this path secure and include it in your [backups](02-getting-started.md#install-backup).
In case you lose the CA private key you have to generate a new CA for signing new client
certificate requests. You then also have to re-create new signed certificates for all existing nodes.
264 Once the master setup is complete, you can also use this node as primary [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing)
265 master. The following section will explain how to use the CLI commands in order to fetch their
266 signed certificate from this master node.
268 ## Signing Certificates on the Master <a id="distributed-monitoring-setup-sign-certificates-master"></a>
270 All certificates must be signed by the same certificate authority (CA). This ensures
271 that all nodes trust each other in a distributed monitoring environment.
273 This CA is generated during the [master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master)
274 and should be the same on all master instances.
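If you add a second master instance later, one way to achieve this is to copy the CA directory over -- a sketch, with `icinga2-master2.localdomain` as a hypothetical secondary master:

[root@icinga2-master1.localdomain /]# scp -r /var/lib/icinga2/ca root@icinga2-master2.localdomain:/var/lib/icinga2/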
276 You can avoid signing and deploying certificates [manually](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-certificates-manual)
277 by using built-in methods for auto-signing certificate signing requests (CSR):
279 * [CSR Auto-Signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) which uses a client ticket generated on the master as trust identifier.
280 * [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) which allows to sign pending certificate requests on the master.
282 Both methods are described in detail below.
> **Note**
>
> [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) is available in Icinga 2 v2.8+.
288 ### CSR Auto-Signing <a id="distributed-monitoring-setup-csr-auto-signing"></a>
290 A client which sends a certificate signing request (CSR) must authenticate itself
291 in a trusted way. The master generates a client ticket which is included in this request.
292 That way the master can verify that the request matches the previously trusted ticket
293 and sign the request.
> **Note**
>
> Icinga 2 v2.8 adds the possibility to forward signing requests on a satellite
> to the master node. This helps with the setup of [three level clusters](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client)
> and more.

Advantages:
303 * Nodes can be installed by different users who have received the client ticket.
304 * No manual interaction necessary on the master node.
305 * Automation tools like Puppet, Ansible, etc. can retrieve the pre-generated ticket in their client catalog
306 and run the node setup directly.
Disadvantages:

* Tickets need to be generated on the master and copied to client setup wizards.
311 * No central signing management.
314 Setup wizards for satellite/client nodes will ask you for this specific client ticket.
316 There are two possible ways to retrieve the ticket:
318 * [CLI command](11-cli-commands.md#cli-command-pki) executed on the master node.
319 * [REST API](12-icinga2-api.md#icinga2-api) request against the master node.
321 Required information:
323 Parameter | Description
324 --------------------|--------------------
325 Common name (CN) | **Required.** The common name for the satellite/client. By convention this should be the FQDN.
327 The following example shows how to generate a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:
329 [root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
331 Querying the [Icinga 2 API](12-icinga2-api.md#icinga2-api) on the master requires an [ApiUser](12-icinga2-api.md#icinga2-api-authentication)
332 object with at least the `actions/generate-ticket` permission.
334 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/conf.d/api-users.conf
object ApiUser "client-pki-ticket" {
  password = "bea11beb7b810ea9ce6ea" //change this
  permissions = [ "actions/generate-ticket" ]
}
341 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
343 Retrieve the ticket on the master node `icinga2-master1.localdomain` with `curl`, for example:
345 [root@icinga2-master1.localdomain /]# curl -k -s -u client-pki-ticket:bea11beb7b810ea9ce6ea -H 'Accept: application/json' \
346 -X POST 'https://localhost:5665/v1/actions/generate-ticket' -d '{ "cn": "icinga2-client1.localdomain" }'
348 Store that ticket number for the satellite/client setup below.
> **Note**
>
> Never expose the ticket salt and/or ApiUser credentials to your client nodes.
353 > Example: Retrieve the ticket on the Puppet master node and send the compiled catalog
354 > to the authorized Puppet agent node which will invoke the
355 > [automated setup steps](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup).
357 ### On-Demand CSR Signing <a id="distributed-monitoring-setup-on-demand-csr-signing"></a>
359 Icinga 2 v2.8 adds the possibility to sign certificates from clients without
360 requiring a client ticket for auto-signing.
Instead, the client sends a certificate signing request to the specified parent node.
This could either be directly the master, or a satellite which forwards the request
to the signing master.
Advantages:

* Central certificate request signing management.
369 * No pre-generated ticket is required for client setups.
Disadvantages:

* Asynchronous step for automated deployments.
374 * Needs client verification on the master.
You can list certificate requests by using the `ca list` CLI command. This also shows
which requests have already been signed.
381 [root@icinga2-master1.localdomain /]# icinga2 ca list
382 Fingerprint | Timestamp | Signed | Subject
383 -----------------------------------------------------------------|---------------------|--------|--------
384 403da5b228df384f07f980f45ba50202529cded7c8182abf96740660caa09727 | 2017/09/06 17:02:40 | * | CN = icinga2-client1.localdomain
385 71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850 | 2017/09/06 17:20:02 | | CN = icinga2-client2.localdomain
388 **Tip**: Add `--json` to the CLI command to retrieve the details in JSON format.
If you want to sign a specific request, you need to use the `ca sign` CLI command
and pass its fingerprint as an argument.
394 [root@icinga2-master1.localdomain /]# icinga2 ca sign 71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850
395 information/cli: Signed certificate for 'CN = icinga2-client2.localdomain'.
398 ## Client/Satellite Setup <a id="distributed-monitoring-setup-satellite-client"></a>
400 This section describes the setup of a satellite and/or client connected to an
401 existing master node setup. If you haven't done so already, please [run the master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master).
403 Icinga 2 on the master node must be running and accepting connections on port `5665`.
406 ### Client/Satellite Setup on Linux <a id="distributed-monitoring-setup-client-linux"></a>
408 Please ensure that you've run all the steps mentioned in the [client/satellite section](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
410 Install the [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and setup
the required [plugins](02-getting-started.md#setting-up-check-plugins) if you haven't done so already.
414 The next step is to run the `node wizard` CLI command.
416 In this example we're generating a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:
418 [root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
419 4f75d2ecd253575fe9180938ebff7cbca262f96e
421 Note: You don't need this step if you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
423 Start the wizard on the client `icinga2-client1.localdomain`:
426 [root@icinga2-client1.localdomain /]# icinga2 node wizard
428 Welcome to the Icinga 2 Setup Wizard!
430 We will guide you through all required configuration details.
433 Press `Enter` or add `y` to start a satellite or client setup.
436 Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]:
439 Press `Enter` to use the proposed name in brackets, or add a specific common name (CN). By convention
440 this should be the FQDN.
443 Starting the Client/Satellite setup routine...
445 Please specify the common name (CN) [icinga2-client1.localdomain]: icinga2-client1.localdomain
448 Specify the direct parent for this node. This could be your primary master `icinga2-master1.localdomain`
449 or a satellite node in a multi level cluster scenario.
452 Please specify the parent endpoint(s) (master or satellite) where this node should connect to:
453 Master/Satellite Common Name (CN from your master/satellite node): icinga2-master1.localdomain
456 Press `Enter` or choose `y` to establish a connection to the parent node.
459 Do you want to establish a connection to the parent node from this node? [Y/n]:
> **Note**
>
> If this node cannot connect to the parent node, choose `n`. The setup
> wizard will provide instructions for this scenario -- signing questions are disabled then.
467 Add the connection details for `icinga2-master1.localdomain`.
470 Please specify the master/satellite connection information:
471 Master/Satellite endpoint host (IP address or FQDN): 192.168.56.101
472 Master/Satellite endpoint port [5665]: 5665
475 You can add more parent nodes if necessary. Press `Enter` or choose `n`
476 if you don't want to add any. This comes in handy if you have more than one
477 parent node, e.g. two masters or two satellites.
480 Add more master/satellite endpoints? [y/N]:
483 Verify the parent node's certificate:
486 Parent certificate information:
488 Subject: CN = icinga2-master1.localdomain
489 Issuer: CN = Icinga CA
490 Valid From: Sep 7 13:41:24 2017 GMT
491 Valid Until: Sep 3 13:41:24 2032 GMT
492 Fingerprint: AC 99 8B 2B 3D B0 01 00 E5 21 FA 05 2E EC D5 A9 EF 9E AA E3
494 Is this information correct? [y/N]: y
The setup wizard fetches the parent node's certificate and asks
you to verify this information. This is to prevent MITM attacks or
any kind of untrusted parent relationship.
Note: The certificate is not fetched if you have chosen not to connect to the parent node.
504 Proceed with adding the optional client ticket for [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing):
507 Please specify the request ticket generated on your Icinga 2 master (optional).
508 (Hint: # icinga2 pki ticket --cn 'icinga2-client1.localdomain'):
509 4f75d2ecd253575fe9180938ebff7cbca262f96e
512 In case you've chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
513 you can leave the ticket question blank.
515 Instead, Icinga 2 tells you to approve the request later on the master node.
518 No ticket was specified. Please approve the certificate signing request manually
519 on the master (see 'icinga2 ca list' and 'icinga2 ca sign --help' for details).
522 You can optionally specify a different bind host and/or port.
Please specify the API bind host/port (optional):
Bind Host []:
Bind Port []:
530 The next step asks you to accept configuration (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync))
531 and commands (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
534 Accept config from parent node? [y/N]: y
535 Accept commands from parent node? [y/N]: y
538 Next you can optionally specify the local and parent zone names. This will be reflected
539 in the generated zone configuration file.
Set the local zone name to something else if you are installing a satellite or secondary master instance.
544 Local zone name [icinga2-client1.localdomain]:
547 Set the parent zone name to something else than `master` if this client connects to a satellite instance instead of the master.
550 Parent zone name [master]:
553 You can add more global zones in addition to `global-templates` and `director-global` if necessary.
Press `Enter` or choose `n` if you don't want to add any additional global zones.
557 Reconfiguring Icinga...
559 Default global zones: global-templates director-global
560 Do you want to specify additional global zones? [y/N]: N
Last but not least the wizard asks you whether you want to disable the inclusion of the local configuration
directory in `conf.d`, or not. This defaults to disabled, as clients are either checked via command endpoint or
receive their configuration synced from the parent zone.
568 Do you want to disable the inclusion of the conf.d directory [Y/n]: Y
569 Disabling the inclusion of the conf.d directory...
573 The wizard proceeds and you are good to go.
578 Now restart your Icinga 2 daemon to finish the installation!
> **Note**
>
> If you have chosen not to connect to the parent node, you cannot start
> Icinga 2 yet. The wizard asked you to manually copy the master's public
> CA certificate file into `/var/lib/icinga2/certs/ca.crt`.
>
> You need to manually sign the CSR on the master node.
589 Restart Icinga 2 as requested.
592 [root@icinga2-client1.localdomain /]# systemctl restart icinga2
595 Here is an overview of all parameters in detail:
597 Parameter | Description
598 --------------------|--------------------
599 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
600 Master common name | **Required.** Use the common name you've specified for your master node before.
601 Establish connection to the parent node | **Optional.** Whether the node should attempt to connect to the parent node or not. Defaults to `y`.
Master/Satellite endpoint host | **Required if the client needs to connect to the master/satellite.** The parent endpoint's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
Master/Satellite endpoint port | **Optional if the client needs to connect to the master/satellite.** The parent endpoint's listening port. This information is included in the `Endpoint` object configuration.
604 Add more master/satellite endpoints | **Optional.** If you have multiple master/satellite nodes configured, add them here.
605 Parent Certificate information | **Required.** Verify that the connecting host really is the requested master node.
606 Request ticket | **Optional.** Add the [ticket](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) generated on the master.
607 API bind host | **Optional.** Allows to specify the address the ApiListener is bound to. For advanced usage only.
608 API bind port | **Optional.** Allows to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
609 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
610 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
611 Local zone name | **Optional.** Allows to specify the name for the local zone. This comes in handy when this instance is a satellite, not a client. Defaults to the FQDN.
612 Parent zone name | **Optional.** Allows to specify the name for the parent zone. This is important if the client has a satellite instance as parent, not the master. Defaults to `master`.
613 Global zones | **Optional.** Allows to specify more global zones in addition to `global-templates` and `director-global`. Defaults to `n`.
614 Disable conf.d | **Optional.** Allows to disable the inclusion of the `conf.d` directory which holds local example configuration. Clients should retrieve their configuration from the parent node, or act as command endpoint execution bridge. Defaults to `y`.
616 The setup wizard will ensure that the following steps are taken:
618 * Enable the `api` feature.
619 * Create a certificate signing request (CSR) for the local node.
* Request a signed certificate (optionally with the provided ticket number) on the master node.
621 * Allow to verify the parent node's certificate.
622 * Store the signed client certificate and ca.crt in `/var/lib/icinga2/certs`.
623 * Update the `zones.conf` file with the new zone hierarchy.
624 * Update `/etc/icinga2/features-enabled/api.conf` (`accept_config`, `accept_commands`) and `constants.conf`.
625 * Update `/etc/icinga2/icinga2.conf` and comment out `include_recursive "conf.d"`.
627 You can verify that the certificate files are stored in the `/var/lib/icinga2/certs` directory.
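On an Icinga 2 v2.8+ client the directory should contain the CA certificate plus the client's certificate and key, for example:

[root@icinga2-client1.localdomain /]# ls -1 /var/lib/icinga2/certs
ca.crt
icinga2-client1.localdomain.crt
icinga2-client1.localdomain.key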
> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`. Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.

> **Note**
>
> If the client is not directly connected to the certificate signing master,
> signing requests and responses might need some minutes to fully update the client certificates.
>
> If you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
> certificates need to be signed on the master first. Ticket-less setups require at least Icinga 2 v2.8+ on all involved instances.
642 Now that you've successfully installed a Linux/Unix satellite/client instance, please proceed to
643 the [configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
647 ### Client Setup on Windows <a id="distributed-monitoring-setup-client-windows"></a>
Download the MSI-Installer package from [https://packages.icinga.com/windows/](https://packages.icinga.com/windows/).

Requirements:
653 * Windows Vista/Server 2008 or higher
654 * Versions older than Windows 10/Server 2016 require the [Universal C Runtime for Windows](https://support.microsoft.com/en-us/help/2999226/update-for-universal-c-runtime-in-windows)
655 * [Microsoft .NET Framework 2.0](https://www.microsoft.com/de-de/download/details.aspx?id=1639) for the setup wizard
657 The installer package includes the [NSClient++](https://www.nsclient.org/) package
658 so that Icinga 2 can use its built-in plugins. You can find more details in
659 [this chapter](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
660 The Windows package also installs native [monitoring plugin binaries](06-distributed-monitoring.md#distributed-monitoring-windows-plugins)
661 to get you started more easily.
> **Note**
>
> Please note that Icinga 2 was designed to run as a light-weight client on Windows.
> There is no support for satellite instances.
668 #### Windows Client Setup Start <a id="distributed-monitoring-setup-client-windows-start"></a>
670 Run the MSI-Installer package and follow the instructions shown in the screenshots.
672 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_01.png)
673 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_02.png)
674 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_03.png)
675 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_04.png)
676 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_05.png)
678 The graphical installer offers to run the Icinga 2 setup wizard after the installation. Select
679 the check box to proceed.
> **Note**
>
> You can also run the Icinga 2 setup wizard from the Start menu later.
685 On a fresh installation the setup wizard guides you through the initial configuration.
It also provides a mechanism to send a certificate request to the [CSR signing master](06-distributed-monitoring.md#distributed-monitoring-setup-sign-certificates-master).
688 The following configuration details are required:
690 Parameter | Description
691 --------------------|--------------------
692 Instance name | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
693 Setup ticket | **Optional.** Paste the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing). If left blank, the certificate request must be [signed on the master node](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
695 Fill in the required information and click `Add` to add a new master connection.
697 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_01.png)
699 Add the following details:
701 Parameter | Description
702 -------------------------------|-------------------------------
703 Instance name | **Required.** The master/satellite endpoint name where this client is a direct child of.
704 Master/Satellite endpoint host | **Required.** The master or satellite's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
705 Master/Satellite endpoint port | **Optional.** The master or satellite's listening port. This information is included in the `Endpoint` object configuration.
707 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02.png)
709 When needed you can add an additional global zone (the zones `global-templates` and `director-global` are added by default):
711 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02_global_zone.png)
713 Optionally enable the following settings:
715 Parameter | Description
716 ----------------------------------|----------------------------------
717 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
718 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
719 Run Icinga 2 service as this user | **Optional.** Specify a different Windows user. This defaults to `NT AUTHORITY\Network Service` and is required for more privileged service checks.
720 Install NSClient++ | **Optional.** The Windows installer bundles the NSClient++ installer for additional [plugin checks](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
721 Disable conf.d | **Optional.** Allows to disable the `include_recursive "conf.d"` directive except for the `api-users.conf` file in the `icinga2.conf` file. Defaults to `true`.
723 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_03.png)
Verify the certificate from the master/satellite instance which this node should connect to.
727 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_04.png)
730 #### Bundled NSClient++ Setup <a id="distributed-monitoring-setup-client-windows-nsclient"></a>
If you have chosen to install/update the NSClient++ package, the Icinga 2 setup wizard asks you to do so.
735 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_01.png)
737 Choose the `Generic` setup.
739 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_02.png)
741 Choose the `Custom` setup type.
743 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_03.png)
745 NSClient++ does not install a sample configuration by default. Change this as shown in the screenshot.
747 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_04.png)
749 Generate a secure password and enable the web server module. **Note**: The webserver module is
750 available starting with NSClient++ 0.5.0. Icinga 2 v2.6+ is required which includes this version.
752 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_05.png)
754 Finish the installation.
756 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_06.png)
758 Open a web browser and navigate to `https://localhost:8443`. Enter the password you've configured
during the setup. In case you lost it, look into the `C:\Program Files\NSClient++\nsclient.ini`
configuration file.
762 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_07.png)
764 The NSClient++ REST API can be used to query metrics. [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
765 uses this transport method.
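A rough sketch of such a check, assuming the `nscp_api` CheckCommand from the Icinga Template Library, an assumed custom attribute `host.vars.os == "Windows"`, and a placeholder password:

apply Service "nscp-api-cpu" {
  check_command = "nscp_api"

  vars.nscp_api_host = host.address
  vars.nscp_api_password = "secure-password-from-setup" //assumption: the password configured during the NSClient++ setup
  vars.nscp_api_query = "check_cpu"

  assign where host.vars.os == "Windows" //assumed custom attribute on the host object
}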
768 #### Finish Windows Client Setup <a id="distributed-monitoring-setup-client-windows-finish"></a>
770 Finish the Windows setup wizard.
772 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_with_ticket.png)
774 If you did not provide a setup ticket, you need to sign the certificate request on the master.
The setup wizard tells you to do so. The Icinga 2 service is already running at this point
776 and will automatically receive and update a signed client certificate.
> **Note**
>
> Ticket-less setups require at least Icinga 2 v2.8+ on all involved instances.
783 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_no_ticket.png)
785 Icinga 2 is automatically started as a Windows service.
787 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_running_service.png)
789 The Icinga 2 configuration is stored inside the `C:\ProgramData\icinga2` directory.
790 Click `Examine Config` in the setup wizard to open a new Explorer window.
792 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_examine_config.png)
The configuration files can be modified with your favorite editor, e.g. Notepad.
In order to use the [top down](06-distributed-monitoring.md#distributed-monitoring-top-down) client
configuration, follow the steps below.
799 Add a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
800 for syncing check commands later. Navigate to `C:\ProgramData\icinga2\etc\icinga2` and open
801 the `zones.conf` file in your preferred editor. Add the following lines if not existing already:
object Zone "global-templates" {
  global = true
}
> **Note**
>
> Packages >= 2.8 provide this configuration by default.
813 You don't need any local configuration on the client except for
814 CheckCommand definitions which can be synced using the global zone
815 above. Therefore disable the inclusion of the `conf.d` directory
816 in the `icinga2.conf` file.
817 Navigate to `C:\ProgramData\icinga2\etc\icinga2` and open
the `icinga2.conf` file in your preferred editor. Remove or comment out (`//`) the following line:
822 // Commented out, not required on a client with top down mode
823 //include_recursive "conf.d"
> **Note**
>
> Packages >= 2.9 provide an option in the setup wizard to disable this.
> Defaults to disabled.
To validate the configuration on Windows, open an administrator terminal
and run the following command:
835 C:\WINDOWS\system32>cd "C:\Program Files\ICINGA2\sbin"
836 C:\Program Files\ICINGA2\sbin>icinga2.exe daemon -C
839 **Note**: You have to run this command in a shell with `administrator` privileges.
841 Now you need to restart the Icinga 2 service. Run `services.msc` from the start menu
842 and restart the `icinga2` service. Alternatively, you can use the `net {start,stop}` CLI commands.
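For example:

C:\WINDOWS\system32>net stop icinga2
C:\WINDOWS\system32>net start icinga2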
844 ![Icinga 2 Windows Service Start/Stop](images/distributed-monitoring/icinga2_windows_cmd_admin_net_start_stop.png)
846 Now that you've successfully installed a Windows client, please proceed to
847 the [detailed configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
851 > The certificate location changed in v2.8 to `%ProgramData%\var\lib\icinga2\certs`.
852 > Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
855 ## Configuration Modes <a id="distributed-monitoring-configuration-modes"></a>
857 There are different ways to ensure that the Icinga 2 cluster nodes execute
858 checks, send notifications, etc.
860 The preferred method is to configure monitoring objects on the master
861 and distribute the configuration to satellites and clients.
863 The following chapters will explain this in detail with hands-on manual configuration
864 examples. You should test and implement this once to fully understand how it works.
866 Once you are familiar with Icinga 2 and distributed monitoring, you
867 can start with additional integrations to manage and deploy your
870 * [Icinga Director](https://github.com/icinga/icingaweb2-module-director) provides a web interface to manage configuration and also allows to sync imported resources (CMDB, PuppetDB, etc.)
871 * [Ansible Roles](https://github.com/Icinga/icinga2-ansible)
872 * [Puppet Module](https://github.com/Icinga/puppet-icinga2)
873 * [Chef Cookbook](https://github.com/Icinga/chef-icinga2)
875 More details can be found [here](13-addons.md#configuration-tools).
877 ### Top Down <a id="distributed-monitoring-top-down"></a>
879 There are two different behaviors with check execution:
881 * Send a command execution event remotely: The scheduler still runs on the parent node.
882 * Sync the host/service objects directly to the child node: Checks are executed locally.
884 Again, technically it does not matter whether this is a `client` or a `satellite`
885 which is receiving configuration or command execution events.
887 ### Top Down Command Endpoint <a id="distributed-monitoring-top-down-command-endpoint"></a>
889 This mode will force the Icinga 2 node to execute commands remotely on a specified endpoint.
890 The host/service object configuration is located on the master/satellite and the client only
891 needs the CheckCommand object definitions being used there.
893 Every endpoint has its own remote check queue. The amount of checks executed simultaneously
894 can be limited on the endpoint with the `MaxConcurrentChecks` constant defined in [constants.conf](04-configuring-icinga-2.md#constants-conf). Icinga 2 may discard check requests,
895 if the remote check queue is full.
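A minimal sketch for tuning this limit on the client's `constants.conf` (512 is the default value; adjust it to the endpoint's capacity):

/* /etc/icinga2/constants.conf */
const MaxConcurrentChecks = 512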
897 ![Icinga 2 Distributed Top Down Command Endpoint](images/distributed-monitoring/icinga2_distributed_top_down_command_endpoint.png)
Advantages:

* No local checks need to be defined on the child node (client).
902 * Light-weight remote check execution (asynchronous events).
903 * No [replay log](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-command-endpoint-log-duration) is necessary for the child node.
904 * Pin checks to specific endpoints (if the child zone consists of 2 endpoints).
Disadvantages:

* If the child node is not connected, no more checks are executed.
909 * Requires additional configuration attribute specified in host/service objects.
910 * Requires local `CheckCommand` object configuration. Best practice is to use a [global config zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync).
912 To make sure that all nodes involved will accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy on all nodes.
916 * `icinga2-master1.localdomain` is the configuration master in this scenario.
917 * `icinga2-client1.localdomain` acts as client which receives command execution messages via command endpoint from the master. In addition, it receives the global check command configuration from the master.
919 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
921 The endpoint configuration could look like this, for example:
923 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111"
}
Next, you need to define two zones. There is no naming convention; best practice is to either use `master`, `satellite`/`client-fqdn`, or to choose region names, for example `Europe`, `USA` and `Asia`.
935 **Note**: Each client requires its own zone and endpoint configuration. Best practice
936 is to use the client's FQDN for all object names.
938 The `master` zone is a parent of the `icinga2-client1.localdomain` zone:
940 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master" //establish zone hierarchy
}
952 In addition, add a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
953 for syncing check commands later:
956 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
object Zone "global-templates" {
  global = true
}
> **Note**
>
> Packages >= 2.8 provide this configuration by default.
967 You don't need any local configuration on the client except for
968 CheckCommand definitions which can be synced using the global zone
969 above. Therefore disable the inclusion of the `conf.d` directory
970 in `/etc/icinga2/icinga2.conf`.
973 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/icinga2.conf
975 // Commented out, not required on a client as command endpoint
976 //include_recursive "conf.d"
> **Note**
>
> Packages >= 2.9 provide an option in the setup wizard to disable this.
> Defaults to disabled.
984 Edit the `api` feature on the client `icinga2-client1.localdomain` in
985 the `/etc/icinga2/features-enabled/api.conf` file and make sure to set
986 `accept_commands` and `accept_config` to `true`:
988 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/features-enabled/api.conf
object ApiListener "api" {
  //...
  accept_commands = true
  accept_config = true
}
Now it is time to validate the configuration and to restart the Icinga 2 daemon on both nodes. Example on CentOS 7:
1001 [root@icinga2-client1.localdomain /]# icinga2 daemon -C
1002 [root@icinga2-client1.localdomain /]# systemctl restart icinga2
1004 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1005 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1007 Once the clients have successfully connected, you are ready for the next step: **execute
1008 a remote check on the client using the command endpoint**.
1010 Include the host and service object configuration in the `master` zone
1011 -- this will help adding a secondary master for high-availability later.
1013 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1015 Add the host and service objects you want to monitor. There is
1016 no limitation for files and directories -- best practice is to
1017 sort things by type.
1019 By convention a master/satellite/client host object should use the same name as the endpoint object.
1020 You can also add multiple hosts which execute checks against remote services/clients.
1022 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
1023 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client1.localdomain" {
  check_command = "hostalive" //check is executed on the master
  address = "192.168.56.111"

  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
Given that you are monitoring a Linux client, we'll add a remote [disk](10-icinga-template-library.md#plugin-check-command-disk) check:
1035 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.client_endpoint
}
1046 If you have your own custom `CheckCommand` definition, add it to the global zone:
1048 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
1049 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf
object CheckCommand "my-cmd" {
  //...
}
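A hypothetical service which runs this command on the client could then look like this -- a sketch following the `disk` pattern above:

apply Service "my-check" {
  check_command = "my-cmd"

  command_endpoint = host.vars.client_endpoint

  assign where host.vars.client_endpoint
}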
1055 Save the changes and validate the configuration on the master node:
1057 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1059 Restart the Icinga 2 daemon (example for CentOS 7):
1061 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1063 The following steps will happen:
1065 * Icinga 2 validates the configuration on `icinga2-master1.localdomain` and restarts.
1066 * The `icinga2-master1.localdomain` node schedules and executes the checks.
1067 * The `icinga2-client1.localdomain` node receives the execute command event with additional command parameters.
1068 * The `icinga2-client1.localdomain` node maps the command parameters to the local check command, executes the check locally, and sends back the check result message.
1070 As you can see, no interaction from your side is required on the client itself, and it's not necessary to reload the Icinga 2 service on the client.
1072 You have learned the basics about command endpoint checks. Proceed with
1073 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
1074 section where you can find detailed information on extending the setup.
1077 ### Top Down Config Sync <a id="distributed-monitoring-top-down-config-sync"></a>
1079 This mode syncs the object configuration files within specified zones.
1080 It comes in handy if you want to configure everything on the master node
1081 and sync the satellite checks (disk, memory, etc.). The satellites run their
1082 own local scheduler and will send the check result messages back to the master.
1084 ![Icinga 2 Distributed Top Down Config Sync](images/distributed-monitoring/icinga2_distributed_top_down_config_sync.png)
Advantages:

* Sync the configuration files from the parent zone to the child zones.
1089 * No manual restart is required on the child nodes, as syncing, validation, and restarts happen automatically.
1090 * Execute checks directly on the child node's scheduler.
1091 * Replay log if the connection drops (important for keeping the check history in sync, e.g. for SLA reports).
1092 * Use a global zone for syncing templates, groups, etc.
Disadvantages:

* Requires a config directory on the master node with the zone name underneath `/etc/icinga2/zones.d`.
1097 * Additional zone and endpoint configuration needed.
1098 * Replay log is replicated on reconnect after connection loss. This might increase the data transfer and create an overload on the connection.
1100 To make sure that all involved nodes accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy on all nodes.
1104 * `icinga2-master1.localdomain` is the configuration master in this scenario.
1105 * `icinga2-client2.localdomain` acts as client which receives configuration from the master. Checks are scheduled locally.
1107 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
1109 The endpoint configuration could look like this:
1111 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112"
}
Next, you need to define two zones. There is no naming convention; best practice is to either use `master`, `satellite`/`client-fqdn`, or to choose region names, for example `Europe`, `USA` and `Asia`.
1123 **Note**: Each client requires its own zone and endpoint configuration. Best practice
1124 is to use the client's FQDN for all object names.
1126 The `master` zone is a parent of the `icinga2-client2.localdomain` zone:
1128 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master" //establish zone hierarchy
}
1140 Edit the `api` feature on the client `icinga2-client2.localdomain` in
1141 the `/etc/icinga2/features-enabled/api.conf` file and set
1142 `accept_config` to `true`.
1144 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/features-enabled/api.conf
object ApiListener "api" {
  //...
  accept_config = true
}
Now it is time to validate the configuration and to restart the Icinga 2 daemon on both nodes.
1154 Example on CentOS 7:
1156 [root@icinga2-client2.localdomain /]# icinga2 daemon -C
1157 [root@icinga2-client2.localdomain /]# systemctl restart icinga2
1159 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1160 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1163 **Tip**: Best practice is to use a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
1164 for common configuration items (check commands, templates, groups, etc.).
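A minimal sketch for such a shared template, stored in the global zone's directory on the config master (the template values are just examples):

[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/templates.conf

template Service "generic-service" {
  max_check_attempts = 3
  check_interval = 1m
  retry_interval = 30s
}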
1166 Once the clients have connected successfully, it's time for the next step: **execute
1167 a local check on the client using the configuration sync**.
1169 Navigate to `/etc/icinga2/zones.d` on your master node
1170 `icinga2-master1.localdomain` and create a new directory with the same
1171 name as your satellite/client zone name:
1173 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/icinga2-client2.localdomain
1175 Add the host and service objects you want to monitor. There is
1176 no limitation for files and directories -- best practice is to
1177 sort things by type.
1179 By convention a master/satellite/client host object should use the same name as the endpoint object.
1180 You can also add multiple hosts which execute checks against remote services/clients.
1182 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/icinga2-client2.localdomain
1183 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim hosts.conf
object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  zone = "master" //optional trick: sync the required host object to the client, but enforce the "master" zone to execute the check
}
Given that you are monitoring a Linux client, we'll just add a local [disk](10-icinga-template-library.md#plugin-check-command-disk) check:
1194 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim services.conf
object Service "disk" {
  host_name = "icinga2-client2.localdomain"

  check_command = "disk"
}
1202 Save the changes and validate the configuration on the master node:
1204 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1206 Restart the Icinga 2 daemon (example for CentOS 7):
1208 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1210 The following steps will happen:
1212 * Icinga 2 validates the configuration on `icinga2-master1.localdomain`.
1213 * Icinga 2 copies the configuration into its zone config store in `/var/lib/icinga2/api/zones`.
1214 * The `icinga2-master1.localdomain` node sends a config update event to all endpoints in the same or direct child zones.
1215 * The `icinga2-client2.localdomain` node accepts config and populates the local zone config store with the received config files.
1216 * The `icinga2-client2.localdomain` node validates the configuration and automatically restarts.
Again, there is no interaction required on the client itself.
1221 You can also use the config sync inside a high-availability zone to
1222 ensure that all config objects are synced among zone members.
**Note**: You can only have one so-called "config master" in a zone which stores
the configuration in the `zones.d` directory.
Multiple nodes with configuration files in the `zones.d` directory are not supported.
1229 Now that you've learned the basics about the configuration sync, proceed with
1230 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
1231 section where you can find detailed information on extending the setup.
1235 If you are eager to start fresh instead you might take a look into the
1236 [Icinga Director](https://github.com/icinga/icingaweb2-module-director).
## Scenarios <a id="distributed-monitoring-scenarios"></a>

The following examples should give you an idea of how to build your own
distributed monitoring environment. We've seen them all in production
environments and received feedback from our [community](https://www.icinga.com/community/get-involved/)
and [partner support](https://www.icinga.com/services/support/) channels:

* Single master with clients.
* HA master with clients as command endpoint.
* Three level cluster with config HA masters, satellites receiving config sync, and clients checked using command endpoint.
### Master with Clients <a id="distributed-monitoring-master-clients"></a>

![Icinga 2 Distributed Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_clients.png)

* `icinga2-master1.localdomain` is the primary master node.
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.

Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
* Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).

Edit the `zones.conf` configuration file on the master:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
    }

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the master actively tries to connect to the client
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }

The two client nodes do not necessarily need to know about each other. The only important thing
is that they know about the parent zone and their endpoint members (and optionally the global zone).

If you specify the `host` attribute in the `icinga2-master1.localdomain` endpoint object,
the clients will actively try to connect to the master node. Since we've already specified the client
endpoints' `host` attribute on the master node, we don't want the clients to connect to the
master. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client1.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }

    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client2.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
Now it is time to define the two client hosts and apply service checks using
the command endpoint execution method on them. Note: You can also use the
config sync mode here.

Create a new configuration directory on the master node:

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master

Add the two client nodes as host objects:

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }

Add services using command endpoint checks:

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

    apply Service "ping4" {
      check_command = "ping4"
      //check is executed on the master node
      assign where host.address
    }

    apply Service "disk" {
      check_command = "disk"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.vars.client_endpoint
    }

Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`:

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2

Open Icinga Web 2 and check the two newly created client hosts with two new services
-- one executed locally (`ping4`) and one using command endpoint (`disk`).
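If you want to verify where a service will be executed, the `object list` CLI command
on the master shows the compiled objects, including the `command_endpoint` attribute.
A quick sketch:

    [root@icinga2-master1.localdomain /]# icinga2 object list --type Service --name *disk*

For the `disk` services, `command_endpoint` should point at the respective client endpoint name.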
### High-Availability Master with Clients <a id="distributed-monitoring-scenarios-ha-master-clients"></a>

![Icinga 2 Distributed High Availability Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_ha_master_clients.png)

This scenario is similar to the one in the [previous section](06-distributed-monitoring.md#distributed-monitoring-master-clients). The only difference is that we will now set up two master nodes in a high-availability setup.
These nodes must be configured as zone and endpoint objects.

The setup uses the capabilities of the Icinga 2 cluster. All zone members
replicate cluster events amongst each other. In addition to that, several Icinga 2
features can enable HA functionality.

**Note**: All nodes in the same zone require that you enable the same features for high-availability (HA).

Overview:

* `icinga2-master1.localdomain` is the config master node.
* `icinga2-master2.localdomain` is the secondary master node without config in `zones.d`.
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.

Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
* Set up `icinga2-master2.localdomain` as [client](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
* Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (when asked to add multiple masters, answer `y` and add the secondary master `icinga2-master2.localdomain`).

In case you don't want to use the CLI commands, you can also manually create and sync the
required SSL certificates. We will modify and discuss all the details of the automatically generated configuration here.

Since there are now two nodes in the same zone, we must consider the
[high-availability features](06-distributed-monitoring.md#distributed-monitoring-high-availability-features).

* Checks and notifications are balanced between the two master nodes. That's fine, but it requires check plugins and notification scripts to exist on both nodes.
* The IDO feature will only be active on one node by default. Since all events are replicated between both nodes, it is easier to just have one central database.

One possibility is to use a dedicated MySQL cluster VIP (external application cluster)
and leave the IDO feature with enabled HA capabilities. Alternatively,
you can disable the HA feature and write to a local database on each node.
Both methods require that you configure Icinga Web 2 accordingly (monitoring
backend, IDO database, used transports, etc.).
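As a sketch for the first approach: keep `enable_ha` turned on and point both masters at the
cluster VIP. The VIP address below is hypothetical, and the remaining generated attributes of
`ido-mysql.conf` stay untouched:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/features-enabled/ido-mysql.conf

    object IdoMysqlConnection "ido-mysql" {
      host = "192.168.56.200" //hypothetical MySQL cluster VIP
      user = "icinga"
      password = "icinga"
      database = "icinga"
      enable_ha = true //default: only one endpoint writes at a time
    }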
The zone hierarchy could look like this. It involves putting the two master nodes
`icinga2-master1.localdomain` and `icinga2-master2.localdomain` into the `master` zone.

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-master2.localdomain" {
      host = "192.168.56.102"
    }

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the master actively tries to connect to the client
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }

The two client nodes do not necessarily need to know about each other. The only important thing
is that they know about the parent zone and their endpoint members (and optionally about the global zone).

If you specify the `host` attribute in the `icinga2-master1.localdomain` and `icinga2-master2.localdomain`
endpoint objects, the clients will actively try to connect to the master nodes. Since we've already specified the client
endpoints' `host` attribute on the master nodes, we don't want the clients to connect to the
master nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-master2.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client1.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }

    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-master2.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client2.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }

Now it is time to define the two client hosts and apply service checks using
the command endpoint execution method. Note: You can also use the
config sync mode here.

Create a new configuration directory on the master node `icinga2-master1.localdomain`.
**Note**: The secondary master node `icinga2-master2.localdomain` receives the
configuration using the [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync).

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master

Add the two client nodes as host objects:

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }

Add services using command endpoint checks:

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

    apply Service "ping4" {
      check_command = "ping4"
      //check is executed on the master node
      assign where host.address
    }

    apply Service "disk" {
      check_command = "disk"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.vars.client_endpoint
    }

Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`:

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2

Open Icinga Web 2 and check the two newly created client hosts with two new services
-- one executed locally (`ping4`) and one using command endpoint (`disk`).

**Tip**: It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
to make sure that your cluster notifies you in case of failure.
### Three Levels with Master, Satellites, and Clients <a id="distributed-monitoring-scenarios-master-satellite-client"></a>

![Icinga 2 Distributed Master and Satellites with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_satellite_client.png)

This scenario combines everything you've learned so far: High-availability masters,
satellites receiving their configuration from the master zone, and clients checked via command
endpoint from the satellite zones.

**Tip**: It can get complicated, so grab a pen and paper and bring your thoughts to life.
Play around with a test setup before using it in a production environment!

Overview:

* `icinga2-master1.localdomain` is the config master node.
* `icinga2-master2.localdomain` is the secondary master node without configuration in `zones.d`.
* `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` are satellite nodes in a `master` child zone.
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.

Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
* Set up `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
* Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).

When asked for the master endpoint providing CSR auto-signing capabilities,
please add the master node which holds the CA and has the `ApiListener` feature configured and enabled.
The parent endpoints must still remain the satellite endpoint names.

Example for `icinga2-client1.localdomain`:

    Please specify the master endpoint(s) this node should connect to:

The "master" in this case is the first satellite, `icinga2-satellite1.localdomain`:

    Master Common Name (CN from your master setup): icinga2-satellite1.localdomain
    Do you want to establish a connection to the master from this node? [Y/n]: y
    Please fill out the master connection information:
    Master endpoint host (Your master's IP address or FQDN): 192.168.56.105
    Master endpoint port [5665]:

Add the second satellite `icinga2-satellite2.localdomain` as "master":

    Add more master endpoints? [y/N]: y
    Master Common Name (CN from your master setup): icinga2-satellite2.localdomain
    Do you want to establish a connection to the master from this node? [Y/n]: y
    Please fill out the master connection information:
    Master endpoint host (Your master's IP address or FQDN): 192.168.56.106
    Master endpoint port [5665]:
    Add more master endpoints? [y/N]: n

Specify the master node `icinga2-master1.localdomain` which holds the CA private key and ticket salt:

    Please specify the master connection for CSR auto-signing (defaults to master endpoint host):
    Host [192.168.56.106]: icinga2-master1.localdomain
    Port [5665]:

In case you cannot connect to the master node from your clients, you'll need to manually
[generate the SSL certificates](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-certificates-manual)
and modify the configuration accordingly.
We'll discuss the details of the required configuration below.

The zone hierarchy can look like this. We'll define only the directly connected zones here.

You can safely deploy this configuration onto all master and satellite zone
members. Keep in mind to control the endpoint [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction)
using the `host` attribute.

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-master2.localdomain" {
      host = "192.168.56.102"
    }

    object Endpoint "icinga2-satellite1.localdomain" {
      host = "192.168.56.105"
    }

    object Endpoint "icinga2-satellite2.localdomain" {
      host = "192.168.56.106"
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]

      parent = "master" //establish zone hierarchy
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }

Repeat the configuration step for `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain`
and `icinga2-satellite2.localdomain`.

Since we want to use [top down command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) checks,
we must configure the client endpoint and zone objects.
In order to minimize the effort, we'll sync the client zone and endpoint configuration to the
satellites where the connection information is needed as well.

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/{master,satellite,global-templates}
    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the satellite actively tries to connect to the client
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "satellite"
    }

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the satellite actively tries to connect to the client
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "satellite"
    }

The two client nodes do not necessarily need to know about each other, either. The only important thing
is that they know about the parent zone (the satellite) and their endpoint members (and optionally the global zone).

If you specify the `host` attribute in the `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain`
endpoint objects, the client nodes will actively try to connect to the satellite nodes. Since we've already specified the client
endpoints' `host` attribute on the satellite nodes, we don't want the client nodes to connect to the
satellites. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**

Example for `icinga2-client1.localdomain`:

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-satellite1.localdomain" {
      //do not actively connect to the satellite by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-satellite2.localdomain" {
      //do not actively connect to the satellite by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client1.localdomain" {
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "satellite"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }

Example for `icinga2-client2.localdomain`:

    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-satellite1.localdomain" {
      //do not actively connect to the satellite by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-satellite2.localdomain" {
      //do not actively connect to the satellite by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client2.localdomain" {
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "satellite"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
Now it is time to define the two client hosts on the master, sync them to the satellites
and apply service checks using the command endpoint execution method to them.
Add the two client nodes as host objects to the `satellite` zone.

We've already created the directories in `/etc/icinga2/zones.d` including the files for the
zone and endpoint configuration for the clients.

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite

Add the host object configuration for the `icinga2-client1.localdomain` client. You should
have created the configuration file in the previous steps; it should already contain the endpoint
and zone object configuration.

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }

Add the host object configuration to the `icinga2-client2.localdomain` client configuration file:

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }

Add a service object which is executed on the satellite nodes (e.g. `ping4`). Pin the apply rule to the `satellite` zone only:

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf

    apply Service "ping4" {
      check_command = "ping4"
      //check is executed on the satellite node
      assign where host.zone == "satellite" && host.address
    }

Add services using command endpoint checks. Pin the apply rules to the `satellite` zone only:

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf

    apply Service "disk" {
      check_command = "disk"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.zone == "satellite" && host.vars.client_endpoint
    }

Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`:

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2

Open Icinga Web 2 and check the two newly created client hosts with two new services
-- one executed locally (`ping4`) and one using command endpoint (`disk`).

**Tip**: It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
to make sure that your cluster notifies you in case of failure.
## Best Practice <a id="distributed-monitoring-best-practice"></a>

We've put together a collection of configuration examples from community feedback.
If you'd like to share your tips and tricks with us, please join the [community channels](https://www.icinga.com/community/get-involved/)!
### Global Zone for Config Sync <a id="distributed-monitoring-global-zone-config-sync"></a>

Global zones can be used to sync generic configuration objects
to all nodes depending on them. Common examples are:

* Templates which are imported into zone specific objects.
* Command objects referenced by Host, Service, Notification objects.
* Apply rules for services, notifications and dependencies.
* User objects referenced in notifications.
* Group objects.
* TimePeriod objects.

Plugin scripts and binaries cannot be synced; this is for Icinga 2
configuration files only. Use your preferred package repository
and/or configuration management tool (Puppet, Ansible, Chef, etc.)
for that.

**Note**: Checkable objects (hosts and services) cannot be put into a global
zone. The configuration validation will terminate with an error.

The zone object configuration must be deployed on all nodes which should receive
the global configuration files:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

    object Zone "global-templates" {
      global = true
    }

Note: Packages >= 2.8 provide this configuration by default.

Similar to the zone configuration sync you'll need to create a new directory in
`/etc/icinga2/zones.d`:

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates

Next, add a new check command, for example:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf

    object CheckCommand "my-cmd" {
      //...
    }
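As a slightly fuller sketch of what such a synced command could look like: the plugin binary
`check_mycmd` and its threshold parameters are hypothetical, while the `PluginDir` constant
and the `arguments` syntax are standard Icinga 2 DSL:

    object CheckCommand "my-cmd" {
      command = [ PluginDir + "/check_mycmd" ] //hypothetical plugin binary

      arguments = {
        "-w" = "$mycmd_warning$"  //hypothetical warning threshold
        "-c" = "$mycmd_critical$" //hypothetical critical threshold
      }
    }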
Restart the client(s) which should receive the global zone
before restarting the parent master/satellite nodes.

Then validate the configuration on the master node and restart Icinga 2.

**Tip**: You can copy the example configuration files located in `/etc/icinga2/conf.d`
into your global zone.

Example:

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/conf.d
    [root@icinga2-master1.localdomain /etc/icinga2/conf.d]# cp {commands,groups,notifications,services,templates,timeperiods,users}.conf /etc/icinga2/zones.d/global-templates
### Health Checks <a id="distributed-monitoring-health-checks"></a>

In case of network failures or other problems, your monitoring might
either have late check results or just send out mass alarms for unknown
checks.

In order to minimize the problems caused by this, you should configure
additional health checks.

The `cluster` check, for example, will check if all endpoints in the current zone and the directly
connected zones are working properly:

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf

    object Host "icinga2-master1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.101"
    }

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf

    object Service "cluster" {
      check_command = "cluster"
      check_interval = 5s
      retry_interval = 1s

      host_name = "icinga2-master1.localdomain"
    }

The `cluster-zone` check will test whether the configured target zone is currently
connected or not. This example adds a health check for the [ha master with clients scenario](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients).

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf

    apply Service "cluster-health" {
      check_command = "cluster-zone"

      display_name = "cluster-health-" + host.name

      /* This follows the convention that the client zone name is the FQDN which is the same as the host object name. */
      vars.cluster_zone = host.name

      assign where host.vars.client_endpoint
    }

In case you cannot assign the `cluster_zone` attribute, add specific
checks to your cluster:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf

    object Service "cluster-zone-satellite" {
      check_command = "cluster-zone"
      check_interval = 5s
      retry_interval = 1s
      vars.cluster_zone = "satellite"

      host_name = "icinga2-master1.localdomain"
    }
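You can also monitor the health of a single node itself. A minimal sketch using the `icinga`
CheckCommand from the ITL, which reports the version and performance data of the instance
executing the check:

    object Service "icinga" {
      check_command = "icinga"

      host_name = "icinga2-master1.localdomain"
    }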
If you are using top down checks with command endpoint configuration, you can
add a dependency which prevents notifications for all other failing services:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/dependencies.conf

    apply Dependency "health-check" to Service {
      parent_service_name = "child-health"

      states = [ OK ]
      disable_notifications = true

      assign where host.vars.client_endpoint
      ignore where service.name == "child-health"
    }
### Pin Checks in a Zone <a id="distributed-monitoring-pin-checks-zone"></a>

In case you want to pin specific checks to their endpoints in a given zone you'll need to use
the `command_endpoint` attribute. This is reasonable if you want to
execute a local disk check in the `master` zone on a specific endpoint.

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf

    object Host "icinga2-master1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.101"
    }

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf

    apply Service "disk" {
      check_command = "disk"

      command_endpoint = host.name //requires a host object matching the endpoint object name, e.g. icinga2-master1.localdomain

      assign where host.zone == "master" && match("icinga2-master*", host.name)
    }

The `host.zone` attribute check inside the expression ensures that
the service object is only created for host objects inside the `master`
zone. In addition, the [match](18-library-reference.md#global-functions-match)
function ensures that services are only created for the master nodes.
### Windows Firewall <a id="distributed-monitoring-windows-firewall"></a>

#### ICMP Requests <a id="distributed-monitoring-windows-firewall-icmp"></a>

By default, ICMP requests are disabled in the Windows firewall. You can
change that by [adding a new rule](https://support.microsoft.com/en-us/kb/947709).

    C:\WINDOWS\system32>netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow

#### Icinga 2 <a id="distributed-monitoring-windows-firewall-icinga2"></a>

If your master/satellite nodes should actively connect to the Windows client,
you'll also need to ensure that port `5665` is enabled:

    C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 5665 (Icinga 2)" dir=in action=allow protocol=TCP localport=5665

#### NSClient++ API <a id="distributed-monitoring-windows-firewall-nsclient-api"></a>

If the [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
plugin is used to query NSClient++, you need to ensure that its port is enabled:

    C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 8443 (NSClient++ API)" dir=in action=allow protocol=TCP localport=8443

For security reasons, it is advised to enable the NSClient++ HTTP API only for local
connections from the Icinga 2 client. Remote connections to the (legacy) HTTP API
are not recommended.
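If you want to enforce this on the NSClient++ side as well, you can restrict the allowed
hosts in its `nsclient.ini`. This is only a sketch -- the section and key names are taken
from the NSClient++ documentation, so verify them against your installed version:

    ; C:\Program Files\NSClient++\nsclient.ini
    [/settings/default]
    allowed hosts = 127.0.0.1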
### Windows Client and Plugins <a id="distributed-monitoring-windows-plugins"></a>

The Icinga 2 package on Windows already provides several plugins.
Detailed [documentation](10-icinga-template-library.md#windows-plugins) is available for all check command definitions.

Add the following `include` statement on all your nodes (master, satellite, client):

    vim /etc/icinga2/icinga2.conf

    include <windows-plugins>

Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
scenario we'll now add a local disk check.

First, add the client node as host object:

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
      vars.os_type = "windows"
    }

Next, add the disk check using command endpoint checks (details in the
[disk-windows](10-icinga-template-library.md#windows-plugins-disk-windows) documentation):

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

    apply Service "disk C:" {
      check_command = "disk-windows"

      vars.disk_win_path = "C:"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.vars.os_type == "windows" && host.vars.client_endpoint
    }

Validate the configuration and restart Icinga 2.

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2

Open Icinga Web 2 and check your newly added Windows disk check :)

![Icinga 2 Client Windows](images/distributed-monitoring/icinga2_distributed_windows_client_disk_icingaweb2.png)

If you want to add your own plugins, please check [this chapter](05-service-monitoring.md#service-monitoring-requirements)
for the requirements.
### Windows Client and NSClient++ <a id="distributed-monitoring-windows-nscp"></a>

There are two methods available for querying NSClient++:

* Query the [HTTP API](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api) locally from an Icinga 2 client (requires a running NSClient++ service).
* Run a [local CLI check](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-local) (does not require NSClient++ as a service).

Both methods have their advantages and disadvantages. One thing to
note: If you rely on performance counter delta calculations such as
CPU utilization, please use the HTTP API instead of the CLI sample call.

#### NSClient++ with check_nscp_api <a id="distributed-monitoring-windows-nscp-check-api"></a>

The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
you to install the NSClient++ package. In addition to the Windows plugins you can
use the [nscp_api command](10-icinga-template-library.md#nscp-check-api) provided by the Icinga Template Library (ITL).

The initial setup for the NSClient++ API and the required arguments
is described in the ITL chapter for the [nscp_api](10-icinga-template-library.md#nscp-check-api) CheckCommand.

Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
scenario we'll now add a local nscp check which queries the NSClient++ API to check the free disk space.

Define a host object called `icinga2-client1.localdomain` on the master. Add the `nscp_api_password`
custom attribute and specify the drives to check.

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
      vars.os_type = "Windows"
      vars.nscp_api_password = "icinga"
      vars.drives = [ "C:", "D:" ]
    }

The service checks are generated using an [apply for](03-monitoring-basics.md#using-apply-for)
rule based on `host.vars.drives`:

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

    apply Service "nscp-api-" for (drive in host.vars.drives) {
      import "generic-service"

      check_command = "nscp_api"
      command_endpoint = host.vars.client_endpoint

      //display_name = "nscp-drive-" + drive

      vars.nscp_api_host = "localhost"
      vars.nscp_api_query = "check_drivesize"
      vars.nscp_api_password = host.vars.nscp_api_password
      vars.nscp_api_arguments = [ "drive=" + drive ]

      ignore where host.vars.os_type != "Windows"
    }

Validate the configuration and restart Icinga 2.

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2

Two new services ("nscp-drive-D:" and "nscp-drive-C:") will be visible in Icinga Web 2.

![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-api](images/distributed-monitoring/icinga2_distributed_windows_nscp_api_drivesize_icingaweb2.png)

Note: You can also omit the `command_endpoint` configuration to execute
the command on the master. This also requires a different value for `nscp_api_host`,
which defaults to `host.address`.

      //command_endpoint = host.vars.client_endpoint

      //vars.nscp_api_host = "localhost"

You can verify the check execution by looking at the `Check Source` attribute
in Icinga Web 2 or the REST API.
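A hedged sketch for the REST API route: assuming an `ApiUser` named `root` with password
`icinga` exists on the master, the following query returns the `check_source` attribute
for the generated drive services (`-G --data-urlencode` simply builds the query string):

    $ curl -k -s -u root:icinga 'https://icinga2-master1.localdomain:5665/v1/objects/services' \
      -G --data-urlencode 'filter=match("nscp-api-*", service.name)' \
      --data-urlencode 'attrs=check_source'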
If you want to monitor specific Windows services, you could use the following example:

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
      vars.os_type = "Windows"
      vars.nscp_api_password = "icinga"
      vars.services = [ "Windows Update", "wscsvc" ]
    }

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

    apply Service "nscp-api-" for (svc in host.vars.services) {
      import "generic-service"

      check_command = "nscp_api"
      command_endpoint = host.vars.client_endpoint

      //display_name = "nscp-service-" + svc

      vars.nscp_api_host = "localhost"
      vars.nscp_api_query = "check_service"
      vars.nscp_api_password = host.vars.nscp_api_password
      vars.nscp_api_arguments = [ "service=" + svc ]

      ignore where host.vars.os_type != "Windows"
    }
#### NSClient++ with nscp-local <a id="distributed-monitoring-windows-nscp-check-local"></a>

The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
you to install the NSClient++ package. In addition to the Windows plugins you can
use the [nscp-local commands](10-icinga-template-library.md#nscp-plugin-check-commands)
provided by the Icinga Template Library (ITL).

![Icinga 2 Distributed Monitoring Windows Client with NSClient++](images/distributed-monitoring/icinga2_distributed_windows_nscp.png)

Add the following `include` statement on all your nodes (master, satellite, client):

    vim /etc/icinga2/icinga2.conf

    include <nscp>

The CheckCommand definitions will automatically determine the installed path
to the `nscp.exe` binary.

Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
scenario we'll now add a local nscp check querying a given performance counter.

First, add the client node as host object:

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
      vars.os_type = "windows"
    }

Next, add a performance counter check using command endpoint checks (details in the
[nscp-local-counter](10-icinga-template-library.md#nscp-check-local-counter) documentation):

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

    apply Service "nscp-local-counter-cpu" {
      check_command = "nscp-local-counter"
      command_endpoint = host.vars.client_endpoint

      vars.nscp_counter_name = "\\Processor(_total)\\% Processor Time"
      vars.nscp_counter_perfsyntax = "Total Processor Time"
      vars.nscp_counter_warning = 1
      vars.nscp_counter_critical = 5

      vars.nscp_counter_showall = true

      assign where host.vars.os_type == "windows" && host.vars.client_endpoint
    }

Validate the configuration and restart Icinga 2.

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2

Open Icinga Web 2 and check your newly added Windows NSClient++ check :)

![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-local](images/distributed-monitoring/icinga2_distributed_windows_nscp_counter_icingaweb2.png)

> **Note**
>
> In order to measure CPU load, you'll need a running NSClient++ service.
> Therefore it is advised to use a local [nscp-api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
> check against its REST API.
## Advanced Hints <a id="distributed-monitoring-advanced-hints"></a>

You can find additional hints in this section if you prefer to go your own route
with automating setups (setup, certificates, configuration).

### Certificate Auto-Renewal <a id="distributed-monitoring-certificate-auto-renewal"></a>

Icinga 2 v2.8+ adds the possibility for nodes to request certificate updates
on their own. If the expiration date is soon enough, they automatically
renew their already signed certificate by sending a signing request to the
parent node.
### High-Availability for Icinga 2 Features <a id="distributed-monitoring-high-availability-features"></a>

All nodes in the same zone require that you enable the same features for high-availability (HA).

By default, the following features provide advanced HA functionality:

* [Checks](06-distributed-monitoring.md#distributed-monitoring-high-availability-checks) (load balanced, automated failover).
* [Notifications](06-distributed-monitoring.md#distributed-monitoring-high-availability-notifications) (load balanced, automated failover).
* [DB IDO](06-distributed-monitoring.md#distributed-monitoring-high-availability-db-ido) (run once, automated failover).

#### High-Availability with Checks <a id="distributed-monitoring-high-availability-checks"></a>

All instances within the same zone (e.g. the `master` zone as HA cluster) must
have the `checker` feature enabled.

Example:

    # icinga2 feature enable checker

All nodes in the same zone load-balance the check execution. If one instance shuts down,
the other nodes will automatically take over the remaining checks.

#### High-Availability with Notifications <a id="distributed-monitoring-high-availability-notifications"></a>

All instances within the same zone (e.g. the `master` zone as HA cluster) must
have the `notification` feature enabled.

Example:

    # icinga2 feature enable notification

Notifications are load-balanced amongst all nodes in a zone. By default this functionality
is enabled.
If your nodes should send out notifications independently from any other nodes (this will cause
duplicated notifications if not properly handled!), you can set `enable_ha = false`
in the [NotificationComponent](09-object-types.md#objecttype-notificationcomponent) feature.
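A sketch of that override -- the feature file below is the one enabled via
`icinga2 feature enable notification`, and everything except `enable_ha` stays as shipped:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/features-enabled/notification.conf

    object NotificationComponent "notification" {
      enable_ha = false
    }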
#### High-Availability with DB IDO <a id="distributed-monitoring-high-availability-db-ido"></a>

All instances within the same zone (e.g. the `master` zone as HA cluster) must
have the DB IDO feature enabled.

Example DB IDO MySQL:

    # icinga2 feature enable ido-mysql

By default the DB IDO feature only runs on one node. All other nodes in the same zone disable
the active IDO database connection at runtime. The node with the active DB IDO connection is
not necessarily the zone master.

**Note**: The DB IDO HA feature can be disabled by setting the `enable_ha` attribute to `false`
for the [IdoMysqlConnection](09-object-types.md#objecttype-idomysqlconnection) or
[IdoPgsqlConnection](09-object-types.md#objecttype-idopgsqlconnection) object on **all** nodes in the
**same** zone.

All endpoints will then enable the DB IDO feature, connect to the configured
database and dump configuration, status and historical data on their own.

If the instance with the active DB IDO connection dies, the HA functionality will
automatically elect a new DB IDO master.

The DB IDO feature will try to determine which cluster endpoint is currently writing
to the database and bail out if another endpoint is active. You can manually verify that
by running the following query command:

    icinga=> SELECT status_update_time, endpoint_name FROM icinga_programstatus;

       status_update_time   | endpoint_name
    ------------------------+------------------------------
     2016-08-15 15:52:26+02 | icinga2-master1.localdomain
    (1 row)

This is useful when the cluster connection between endpoints breaks and prevents
data duplication in split-brain scenarios. The failover timeout can be set via the
`failover_timeout` attribute, but not lower than 60 seconds.
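A sketch showing both attributes on the IDO connection object; keep the values identical
on all zone members and leave the other generated attributes untouched:

    object IdoMysqlConnection "ido-mysql" {
      //...
      enable_ha = true        //set to false on all nodes to write independently
      failover_timeout = 120s //must not be lower than 60s
    }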
### Endpoint Connection Direction <a id="distributed-monitoring-advanced-hints-connection-direction"></a>

Nodes will attempt to connect to another node when their local [Endpoint](09-object-types.md#objecttype-endpoint) object
configuration specifies a valid `host` attribute (FQDN or IP address).

Example for the master node `icinga2-master1.localdomain` actively connecting
to the client node `icinga2-client1.localdomain`:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

    //...

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
    }

Example for the client node `icinga2-client1.localdomain` not actively
connecting to the master node `icinga2-master1.localdomain`:

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    //...

    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

It is not necessary for both the master and the client node to establish
two connections to each other. Icinga 2 will only use one connection
and close the second connection if established.

**Tip**: Choose either to let master/satellite nodes connect to client nodes
or vice versa.
### Disable Log Duration for Command Endpoints <a id="distributed-monitoring-advanced-hints-command-endpoint-log-duration"></a>

The replay log is a built-in mechanism to ensure that nodes in a distributed setup
keep the same history (check results, notifications, etc.) when nodes are temporarily
disconnected and then reconnect.

This functionality is not needed when a master/satellite node is sending check
execution events to a client which is purely configured for [command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
checks only.

The [Endpoint](09-object-types.md#objecttype-endpoint) object attribute `log_duration` can
be lowered or set to 0 to fully disable any log replay updates when the
client is not connected.

Configuration on the master node `icinga2-master1.localdomain`:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
      log_duration = 0
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the master actively tries to connect to the client
      log_duration = 0
    }

Configuration on the client `icinga2-client1.localdomain`:

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
      log_duration = 0
    }

    object Endpoint "icinga2-master2.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
      log_duration = 0
    }
### CSR Auto-Signing with HA and Multi-Level Clusters <a id="distributed-monitoring-advanced-hints-csr-autosigning-ha-satellites"></a>

If you are using two masters in a high-availability setup, it can be necessary
to allow both to sign requested certificates. Make sure to safely sync the following
details in advance:

* `TicketSalt` constant in `constants.conf`.
* `/var/lib/icinga2/ca` directory.

This also helps if you are using a [three level cluster](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client)
and your client nodes are not able to reach the CSR auto-signing master node(s).
Make sure that the directory permissions for `/var/lib/icinga2/ca` are secure
(not world readable).

**Do not expose these private keys anywhere else. This is a matter of security.**
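Syncing the CA directory between the two masters should therefore only happen over a secure
channel. One possible sketch, assuming root SSH access from the first master to the second;
any other secure file transfer works just as well:

    [root@icinga2-master1.localdomain /]# rsync -av /var/lib/icinga2/ca/ icinga2-master2.localdomain:/var/lib/icinga2/ca/

Verify the directory permissions on the second master afterwards.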
### Manual Certificate Creation <a id="distributed-monitoring-advanced-hints-certificates-manual"></a>

#### Create CA on the Master <a id="distributed-monitoring-advanced-hints-certificates-manual-ca"></a>

Choose the host which should store the certificate authority (one of the master nodes).

The first step is the creation of the certificate authority (CA) by running the following command:

    [root@icinga2-master1.localdomain /root]# icinga2 pki new-ca

#### Create CSR and Certificate <a id="distributed-monitoring-advanced-hints-certificates-manual-create"></a>

Create a certificate signing request (CSR) for the local instance:

    [root@icinga2-master1.localdomain /root]# icinga2 pki new-cert --cn icinga2-master1.localdomain \
      --key icinga2-master1.localdomain.key \
      --csr icinga2-master1.localdomain.csr

Sign the CSR with the previously created CA:

    [root@icinga2-master1.localdomain /root]# icinga2 pki sign-csr --csr icinga2-master1.localdomain.csr --cert icinga2-master1.localdomain.crt

Repeat the steps for all instances in your setup.

> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`.
> Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.

#### Copy Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-copy"></a>

Copy the host's certificate files and the public CA certificate to `/var/lib/icinga2/certs`:

    [root@icinga2-master1.localdomain /root]# mkdir -p /var/lib/icinga2/certs
    [root@icinga2-master1.localdomain /root]# cp icinga2-master1.localdomain.{crt,key} /var/lib/icinga2/certs
    [root@icinga2-master1.localdomain /root]# cp /var/lib/icinga2/ca/ca.crt /var/lib/icinga2/certs

Ensure that proper permissions are set (replace `icinga` with the Icinga 2 daemon user):

    [root@icinga2-master1.localdomain /root]# chown -R icinga:icinga /var/lib/icinga2/certs
    [root@icinga2-master1.localdomain /root]# chmod 600 /var/lib/icinga2/certs/*.key
    [root@icinga2-master1.localdomain /root]# chmod 644 /var/lib/icinga2/certs/*.crt

The CA public and private key are stored in the `/var/lib/icinga2/ca` directory. Keep this path secure and include
it in your backups.

#### Create Multiple Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-multiple"></a>

Use your preferred method to automate the certificate generation process.

    [root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do icinga2 pki new-cert --cn $node --csr $node.csr --key $node.key; done
    information/base: Writing private key to 'icinga2-master1.localdomain.key'.
    information/base: Writing certificate signing request to 'icinga2-master1.localdomain.csr'.
    information/base: Writing private key to 'icinga2-master2.localdomain.key'.
    information/base: Writing certificate signing request to 'icinga2-master2.localdomain.csr'.
    information/base: Writing private key to 'icinga2-satellite1.localdomain.key'.
    information/base: Writing certificate signing request to 'icinga2-satellite1.localdomain.csr'.

    [root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do sudo icinga2 pki sign-csr --csr $node.csr --cert $node.crt; done
    information/pki: Writing certificate to file 'icinga2-master1.localdomain.crt'.
    information/pki: Writing certificate to file 'icinga2-master2.localdomain.crt'.
    information/pki: Writing certificate to file 'icinga2-satellite1.localdomain.crt'.

Copy and move these certificates to the respective instances, e.g. with SSH/SCP.
## Automation <a id="distributed-monitoring-automation"></a>

These hints should get you started with your own automation tools (Puppet, Ansible, Chef, Salt, etc.)
or custom scripts for automated setup.

These are collected best practices from various community channels:

* [Silent Windows setup](06-distributed-monitoring.md#distributed-monitoring-automation-windows-silent)
* [Node Setup CLI command](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup) with parameters

If you prefer an alternate method, we still recommend leaving all the Icinga 2 features intact (e.g. `icinga2 feature enable api`).
You should also use well known and documented default configuration file locations (e.g. `zones.conf`).
This will tremendously help when someone is trying to help you in the [community channels](https://www.icinga.com/community/get-involved/).

### Silent Windows Setup <a id="distributed-monitoring-automation-windows-silent"></a>

If you want to install the client silently/unattended, use the `/qn` modifier. The
installation should not trigger a restart, but if you want to be completely sure, you can use the `/norestart` modifier.

    C:> msiexec /i C:\Icinga2-v2.5.0-x86.msi /qn /norestart

Once the setup is completed you can use the `node setup` CLI command, too.
2592 ### Node Setup using CLI Parameters <a id="distributed-monitoring-automation-cli-node-setup"></a>
2594 Instead of using the `node wizard` CLI command, there is an alternative `node setup`
2595 command available which has some prerequisites.
2597 **Note**: The CLI command can be used on Linux/Unix and Windows operating systems.
2598 The graphical Windows setup wizard actively uses these CLI commands.
2600 #### Node Setup on the Master Node <a id="distributed-monitoring-automation-cli-node-setup-master"></a>
2602 In case you want to setup a master node you must add the `--master` parameter
2603 to the `node setup` CLI command. In addition to that the `--cn` can optionally
2604 be passed (defaults to the FQDN).

Parameter | Description
--------------------|--------------------
Common name (CN) | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN. Defaults to the FQDN.
Zone name | **Optional.** Specified with the `--zone` parameter. Defaults to `master`.
Listen on | **Optional.** Specified with the `--listen` parameter. Syntax is `host,port`.
Disable conf.d | **Optional.** Specified with the `--disable-confd` parameter. If provided, this disables the `include_recursive "conf.d"` directive and adds the `api-users.conf` file inclusion to `icinga2.conf`. Available since v2.9. Not set by default for compatibility reasons with Puppet, Ansible, Chef, etc.

    [root@icinga2-master1.localdomain /]# icinga2 node setup --master

In case you want to bind the `ApiListener` object to a specific
host/port you can specify it like this:

    --listen 192.168.56.101,5665

In case you don't need anything in `conf.d`, use the following command line:

    [root@icinga2-master1.localdomain /]# icinga2 node setup --master --disable-confd
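
Putting the optional parameters together, a combined call could look like this (a sketch using the CN and address from the examples in this chapter):

    [root@icinga2-master1.localdomain /]# icinga2 node setup --master \
    --cn icinga2-master1.localdomain \
    --listen 192.168.56.101,5665 \
    --disable-confd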

#### Node Setup with Satellites/Clients <a id="distributed-monitoring-automation-cli-node-setup-satellite-client"></a>

> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`. Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths).

Make sure that the `/var/lib/icinga2/certs` directory exists and is owned by the `icinga`
user (or the user Icinga 2 is running as).

    [root@icinga2-client1.localdomain /]# mkdir -p /var/lib/icinga2/certs
    [root@icinga2-client1.localdomain /]# chown -R icinga:icinga /var/lib/icinga2/certs

First you'll need to generate a new local self-signed certificate.
Pass the following details to the `pki new-cert` CLI command:

Parameter | Description
--------------------|--------------------
Common name (CN) | **Required.** By convention this should be the host's FQDN.
Client certificate files | **Required.** These generated files will be put into the specified location (`--key` and `--cert`). By convention this should use `/var/lib/icinga2/certs` as the directory.

    [root@icinga2-client1.localdomain /]# icinga2 pki new-cert --cn icinga2-client1.localdomain \
    --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
    --cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt

Request the master certificate from the master host (`icinga2-master1.localdomain`)
and store it as `trusted-parent.crt`. Review it and continue.

Pass the following details to the `pki save-cert` CLI command:

Parameter | Description
--------------------|--------------------
Client certificate files | **Required.** Pass the previously generated files using the `--key` and `--cert` parameters.
Trusted parent certificate | **Required.** Store the parent's certificate file. Manually verify that you're trusting it.
Parent host | **Required.** FQDN or IP address of the parent host.

    [root@icinga2-client1.localdomain /]# icinga2 pki save-cert --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
    --cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt \
    --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
    --host icinga2-master1.localdomain

Continue with the additional node setup step. Specify a local endpoint and zone name (`icinga2-client1.localdomain`)
and set the master host (`icinga2-master1.localdomain`) as parent zone configuration. Specify the path to
the previously stored trusted parent certificate.

Pass the following details to the `node setup` CLI command:

Parameter | Description
--------------------|--------------------
Common name (CN) | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN.
Request ticket | **Required.** Add the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing).
Trusted parent certificate | **Required.** Add the previously fetched trusted parent certificate (this step means that you've verified its origin).
Parent host | **Optional.** FQDN or IP address of the parent host. This is where the command connects for CSR signing. If not specified, you need to manually copy the parent's public CA certificate file into `/var/lib/icinga2/certs/ca.crt` in order to start Icinga 2.
Parent endpoint | **Required.** Specify the parent's endpoint name.
Client zone name | **Required.** Specify the client's zone name.
Parent zone name | **Optional.** Specify the parent's zone name.
Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)).
Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
Global zones | **Optional.** Allows specifying more global zones in addition to `global-templates` and `director-global`.
Disable conf.d | **Optional.** Specified with the `--disable-confd` parameter. If provided, this disables the `include_recursive "conf.d"` directive in `icinga2.conf`. Available since v2.9. Not set by default for compatibility reasons with Puppet, Ansible, Chef, etc.

> **Note**
>
> The `master_host` parameter is deprecated and will be removed in 2.10.0. Please use `--parent_host` instead.

Example for Icinga 2 v2.9:

    [root@icinga2-client1.localdomain /]# icinga2 node setup --ticket ead2d570e18c78abf285d6b85524970a0f69c22d \
    --cn icinga2-client1.localdomain \
    --endpoint icinga2-master1.localdomain \
    --zone icinga2-client1.localdomain \
    --parent_zone master \
    --parent_host icinga2-master1.localdomain \
    --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
    --accept-commands --accept-config \
    --disable-confd

In case the client should connect to the master node, you'll
need to modify the `--endpoint` parameter using the format `cn,host,port`:

    --endpoint icinga2-master1.localdomain,192.168.56.101,5665

Specify the parent zone using the `--parent_zone` parameter. This is useful
if the client connects to a satellite, not the master instance.

    --parent_zone satellite

In case the client should know the additional global zone `linux-templates`, you'll
need to set the `--global_zones` parameter.

    --global_zones linux-templates

The `--parent_host` parameter is optional since v2.9 and allows you to perform a connection-less setup.
You cannot restart Icinga 2 yet: the CLI command asked you to manually copy the parent's public CA
certificate file into `/var/lib/icinga2/certs/ca.crt`. Once Icinga 2 is started, it sends
a ticket signing request to the parent node. If you have provided a ticket, the master node
signs the request and sends it back to the client, which performs a certificate update in-memory.

In case you did not provide a ticket, you need to manually sign the CSR on the master node
which holds the CA's key pair.
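
On the master, pending certificate requests can be listed and signed by fingerprint with the `ca` CLI commands (the fingerprint below is just a placeholder):

    [root@icinga2-master1.localdomain /]# icinga2 ca list
    [root@icinga2-master1.localdomain /]# icinga2 ca sign 5c31ca0e2269c10363a97e40e3f2b2cd56493f9194d5b1852541b835970da46e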

**You can find additional best practices below.**

Add an additional global zone. Please note the `>>` append mode.

    [root@icinga2-client1.localdomain /]# cat <<EOF >>/etc/icinga2/zones.conf
    object Zone "global-templates" {
      global = true
    }
    EOF

Note: Packages >= 2.8 provide this configuration by default.

If this client node is configured as [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
you can safely disable the `checker` feature. The `node setup` CLI command already disabled the `notification` feature.

    [root@icinga2-client1.localdomain /]# icinga2 feature disable checker

Disable "conf.d" inclusion if this is a [top down](06-distributed-monitoring.md#distributed-monitoring-top-down)
configured client.

    [root@icinga2-client1.localdomain /]# sed -i 's/include_recursive "conf.d"/\/\/include_recursive "conf.d"/g' /etc/icinga2/icinga2.conf

**Optional**: Add an ApiUser object configuration for remote troubleshooting.

    [root@icinga2-client1.localdomain /]# cat <<EOF >/etc/icinga2/conf.d/api-users.conf
    object ApiUser "root" {
      password = "clientsupersecretpassword"
      permissions = [ "*" ]
    }
    EOF

In case you've previously disabled the "conf.d" directory, only
add the file `conf.d/api-users.conf`:

    [root@icinga2-client1.localdomain /]# echo 'include "conf.d/api-users.conf"' >> /etc/icinga2/icinga2.conf

Finally, restart Icinga 2.

    [root@icinga2-client1.localdomain /]# systemctl restart icinga2
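
You can verify that the service came up and watch the certificate signing in the log (generic commands, not specific to this setup):

    [root@icinga2-client1.localdomain /]# systemctl status icinga2
    [root@icinga2-client1.localdomain /]# tail -f /var/log/icinga2/icinga2.log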

Your automation tool must then configure the master node in the meantime.
Add the global zone `global-templates` in case it did not exist.

    # cat <<EOF >>/etc/icinga2/zones.conf
    object Endpoint "icinga2-client1.localdomain" {
      //client connects itself
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]
      parent = "master"
    }

    object Zone "global-templates" {
      global = true
    }
    EOF
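
Afterwards, validate the configuration and reload the master (standard commands on the master node):

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl reload icinga2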

## Using Multiple Environments <a id="distributed-monitoring-environments"></a>

In some cases it can be desired to run multiple Icinga instances on the same host.
Two potential scenarios include:

* Different versions of the same monitoring configuration (e.g. production and testing)
* Disparate sets of checks for entirely unrelated monitoring environments (e.g. infrastructure and applications)

The configuration is done with the global constants `ApiBindHost` and `ApiBindPort`
or the `bind_host` and `bind_port` attributes of the
[ApiListener](09-object-types.md#objecttype-apilistener) object.

The environment must be set with the global constant `Environment` or as object attribute
of the [IcingaApplication](09-object-types.md#objecttype-icingaapplication) object.

In any case the constant provides the default value for the attribute, and the direct configuration in the objects
takes precedence. The constants have been added to allow setting the values from the CLI on startup.
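
A minimal sketch for one of the instances (illustrative values; the constant would typically go into `constants.conf`, the listener attributes into `features-available/api.conf`):

    const Environment = "production"

    object ApiListener "api" {
      bind_host = "127.0.0.1"
      bind_port = 5665
    }

The second instance would use a different `Environment` value and `bind_port`. Alternatively, the constants can be passed on startup via the daemon's `--define` option.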

When Icinga establishes a TLS connection to another cluster instance it automatically uses the [SNI extension](https://en.wikipedia.org/wiki/Server_Name_Indication)
to signal which endpoint it is attempting to connect to. On its own this can already be used to position multiple
Icinga instances behind a load balancer.

SNI example: `icinga2-client1.localdomain`

However, if the environment is configured to `production`, Icinga appends the environment name to the SNI hostname like this:

SNI example with environment: `icinga2-client1.localdomain:production`
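
To see which SNI value is presented (and thus what such middleware would route on), you can connect manually with `openssl s_client` (a generic test; host and port are examples from this chapter):

    [root@icinga2-client1.localdomain /]# openssl s_client -connect 192.168.56.101:5665 -servername icinga2-client1.localdomain:production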

Middleware like load balancers or TLS proxies can read the SNI header and route the connection to the appropriate target.
I.e., it uses a single externally-visible TCP port (usually 5665) and forwards connections to one or more Icinga
instances which are bound to a local TCP port. It does so by inspecting the environment name that is sent as part of the