1 # Distributed Monitoring with Master, Satellites, and Clients <a id="distributed-monitoring"></a>
3 This chapter will guide you through the setup of a distributed monitoring
4 environment, including high-availability clustering and setup details
5 for the Icinga 2 client.
7 ## Roles: Master, Satellites, and Clients <a id="distributed-monitoring-roles"></a>
9 Icinga 2 nodes can be given names for easier understanding:
11 * A `master` node which is on top of the hierarchy.
12 * A `satellite` node which is a child of a `satellite` or `master` node.
13 * A `client` node which works as an `agent` connected to `master` and/or `satellite` nodes.
15 ![Icinga 2 Distributed Roles](images/distributed-monitoring/icinga2_distributed_roles.png)
17 Rephrasing this picture into more details:
19 * A `master` node has no parent node.
* A `master` node is where you usually install Icinga Web 2.
21 * A `master` node can combine executed checks from child nodes into backends and notifications.
22 * A `satellite` node has a parent and a child node.
23 * A `satellite` node may execute checks on its own or delegate check execution to child nodes.
24 * A `satellite` node can receive configuration for hosts/services, etc. from the parent node.
25 * A `satellite` node continues to run even if the master node is temporarily unavailable.
26 * A `client` node only has a parent node.
27 * A `client` node will either run its own configured checks or receive command execution events from the parent node.
29 The following sections will refer to these roles and explain the
30 differences and the possibilities this kind of setup offers.
**Tip**: If you just want to install a single master node that monitors several hosts
(i.e. Icinga 2 clients), continue reading -- we'll start with the single master setup first.
35 In case you are planning a huge cluster setup with multiple levels and
36 lots of clients, read on -- we'll deal with these cases later on.
38 The installation on each system is the same: You need to install the
39 [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and the required [plugins](02-getting-started.md#setting-up-check-plugins).
41 The required configuration steps are mostly happening
42 on the command line. You can also [automate the setup](06-distributed-monitoring.md#distributed-monitoring-automation).
The first thing you need to learn about a distributed setup is the hierarchy of the single components.
46 ## Zones <a id="distributed-monitoring-zones"></a>
48 The Icinga 2 hierarchy consists of so-called [zone](09-object-types.md#objecttype-zone) objects.
49 Zones depend on a parent-child relationship in order to trust each other.
51 ![Icinga 2 Distributed Zones](images/distributed-monitoring/icinga2_distributed_zones.png)
53 Have a look at this example for the `satellite` zones which have the `master` zone as a parent zone:
    object Zone "master" {
      //...
    }

    object Zone "satellite region 1" {
      parent = "master"
      //...
    }

    object Zone "satellite region 2" {
      parent = "master"
      //...
    }
69 There are certain limitations for child zones, e.g. their members are not allowed
70 to send configuration commands to the parent zone members. Vice versa, the
71 trust hierarchy allows for example the `master` zone to send
72 configuration files to the `satellite` zone. Read more about this
73 in the [security section](06-distributed-monitoring.md#distributed-monitoring-security).
75 `client` nodes also have their own unique zone. By convention you
76 can use the FQDN for the zone name.
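For example, a minimal zone sketch for a client node `icinga2-client1.localdomain` could look like this (the matching `Endpoint` object is covered in the next section):

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master"
    }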
78 ## Endpoints <a id="distributed-monitoring-endpoints"></a>
80 Nodes which are a member of a zone are so-called [Endpoint](09-object-types.md#objecttype-endpoint) objects.
82 ![Icinga 2 Distributed Endpoints](images/distributed-monitoring/icinga2_distributed_endpoints.png)
84 Here is an example configuration for two endpoints in different zones:
    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-satellite1.localdomain" {
      host = "192.168.56.105"
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain" ]
    }
All endpoints in the same zone work as a high-availability setup. For
example, if you have two nodes in the `master` zone, they will load-balance the check execution.
106 Endpoint objects are important for specifying the connection
107 information, e.g. if the master should actively try to connect to a client.
109 The zone membership is defined inside the `Zone` object definition using
110 the `endpoints` attribute with an array of `Endpoint` names.
112 If you want to check the availability (e.g. ping checks) of the node
113 you still need a [Host](09-object-types.md#objecttype-host) object.
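Such a host object could look like this minimal sketch, reusing the satellite endpoint's address from the example above:

    object Host "icinga2-satellite1.localdomain" {
      check_command = "hostalive" //ping check executed on the parent node
      address = "192.168.56.105"
    }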
115 ## ApiListener <a id="distributed-monitoring-apilistener"></a>
117 In case you are using the CLI commands later, you don't have to write
118 this configuration from scratch in a text editor.
119 The [ApiListener](09-object-types.md#objecttype-apilistener)
120 object is used to load the SSL certificates and specify restrictions, e.g.
121 for accepting configuration commands.
123 It is also used for the [Icinga 2 REST API](12-icinga2-api.md#icinga2-api) which shares
124 the same host and port with the Icinga 2 Cluster protocol.
126 The object configuration is stored in the `/etc/icinga2/features-enabled/api.conf`
127 file. Depending on the configuration mode the attributes `accept_commands`
128 and `accept_config` can be configured here.
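As a sketch, the file could look like this with both modes enabled (the attributes default to `false` when omitted):

    object ApiListener "api" {
      accept_commands = true
      accept_config = true
    }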
130 In order to use the `api` feature you need to enable it and restart Icinga 2.
132 icinga2 feature enable api
134 ## Conventions <a id="distributed-monitoring-conventions"></a>
136 By convention all nodes should be configured using their FQDN.
138 Furthermore, you must ensure that the following names
139 are exactly the same in all configuration files:
141 * Host certificate common name (CN).
142 * Endpoint configuration object for the host.
143 * NodeName constant for the local host.
145 Setting this up on the command line will help you to minimize the effort.
146 Just keep in mind that you need to use the FQDN for endpoints and for
147 common names when asked.
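For illustration, assuming a node named `icinga2-client1.localdomain`, the names line up like this:

    // constants.conf -- NodeName matches the certificate common name (CN)
    const NodeName = "icinga2-client1.localdomain"

    // zones.conf -- the endpoint object uses the same FQDN
    object Endpoint "icinga2-client1.localdomain" {
    }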
149 ## Security <a id="distributed-monitoring-security"></a>
While there are certain mechanisms to ensure a secure communication between all
nodes (firewalls, policies, software hardening, etc.), Icinga 2 also provides additional security:
155 * SSL certificates are mandatory for communication between nodes. The CLI commands
156 help you create those certificates.
157 * Child zones only receive updates (check results, commands, etc.) for their configured objects.
158 * Child zones are not allowed to push configuration updates to parent zones.
159 * Zones cannot interfere with other zones and influence each other. Each checkable host or service object is assigned to **one zone** only.
160 * All nodes in a zone trust each other.
161 * [Config sync](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync) and [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) is disabled by default.
163 The underlying protocol uses JSON-RPC event notifications exchanged by nodes.
164 The connection is secured by TLS. The message protocol uses an internal API,
165 and as such message types and names may change internally and are not documented.
Zones build the trust relationship in a distributed environment. If you do not specify
a zone for a client and its parent zone, the parent zone's members (e.g. the master instance)
won't trust the client.
171 Building this trust is key in your distributed environment. That way the parent node
172 knows that it is able to send messages to the child zone, e.g. configuration objects,
173 configuration in global zones, commands to be executed in this zone/for this endpoint.
174 It also receives check results from the child zone for checkable objects (host/service).
Vice versa, the client trusts the master and accepts configuration and commands if enabled
in the api feature. If the client were to send configuration to the parent zone, the parent nodes
would deny it. The parent zone is the configuration entity and does not trust clients in this matter.
A client could, for example, attempt to modify a different client or inject a check command
with malicious code.
182 While it may sound complicated for client setups, it removes the problem with different roles
183 and configurations for a master and a client. Both of them work the same way, are configured
184 in the same way (Zone, Endpoint, ApiListener), and you can troubleshoot and debug them in just one go.
186 ## Master Setup <a id="distributed-monitoring-setup-master"></a>
188 This section explains how to install a central single master node using
189 the `node wizard` command. If you prefer to do an automated installation, please
190 refer to the [automated setup](06-distributed-monitoring.md#distributed-monitoring-automation) section.
Install the [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and setup
the required [plugins](02-getting-started.md#setting-up-check-plugins) if you haven't done so already.
196 **Note**: Windows is not supported for a master node setup.
The next step is to run the `node wizard` CLI command. Prior to that,
make sure to collect the required information:
201 Parameter | Description
202 --------------------|--------------------
203 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
204 Master zone name | **Optional.** Allows to specify the master zone name. Defaults to `master`.
205 Global zones | **Optional.** Allows to specify more global zones in addition to `global-templates` and `director-global`. Defaults to `n`.
206 API bind host | **Optional.** Allows to specify the address the ApiListener is bound to. For advanced usage only.
207 API bind port | **Optional.** Allows to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
208 Disable conf.d | **Optional.** Allows to disable the `include_recursive "conf.d"` directive except for the `api-users.conf` file in the `icinga2.conf` file. Defaults to `y`. Configuration on the master is discussed below.
210 The setup wizard will ensure that the following steps are taken:
212 * Enable the `api` feature.
213 * Generate a new certificate authority (CA) in `/var/lib/icinga2/ca` if it doesn't exist.
214 * Create a certificate for this node signed by the CA key.
215 * Update the [zones.conf](04-configuring-icinga-2.md#zones-conf) file with the new zone hierarchy.
216 * Update the [ApiListener](06-distributed-monitoring.md#distributed-monitoring-apilistener) and [constants](04-configuring-icinga-2.md#constants-conf) configuration.
217 * Update the [icinga2.conf](04-configuring-icinga-2.md#icinga2-conf) to disable the `conf.d` inclusion, and add the `api-users.conf` file inclusion.
219 Here is an example of a master setup for the `icinga2-master1.localdomain` node on CentOS 7:
222 [root@icinga2-master1.localdomain /]# icinga2 node wizard
224 Welcome to the Icinga 2 Setup Wizard!
226 We will guide you through all required configuration details.
228 Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]: n
230 Starting the Master setup routine...
232 Please specify the common name (CN) [icinga2-master1.localdomain]: icinga2-master1.localdomain
233 Reconfiguring Icinga...
234 Checking for existing certificates for common name 'icinga2-master1.localdomain'...
235 Certificates not yet generated. Running 'api setup' now.
236 Generating master configuration for Icinga 2.
237 Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.
239 Master zone name [master]:
241 Default global zones: global-templates director-global
242 Do you want to specify additional global zones? [y/N]: N
244 Please specify the API bind host/port (optional):
248 Do you want to disable the inclusion of the conf.d directory [Y/n]:
249 Disabling the inclusion of the conf.d directory...
250 Checking if the api-users.conf file exists...
254 Now restart your Icinga 2 daemon to finish the installation!
257 You can verify that the CA public and private keys are stored in the `/var/lib/icinga2/ca` directory.
258 Keep this path secure and include it in your [backups](02-getting-started.md#install-backup).
In case you lose the CA private key you have to generate a new CA for signing new client
certificate requests. You then have to also re-create new signed certificates for all
existing nodes.
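A sketch of the involved steps: move the old CA directory out of the way first, then let the `pki new-ca` CLI command generate a fresh CA in `/var/lib/icinga2/ca` (the backup path is an arbitrary choice):

    [root@icinga2-master1.localdomain /]# mv /var/lib/icinga2/ca /var/lib/icinga2/ca.bak
    [root@icinga2-master1.localdomain /]# icinga2 pki new-ca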
Once the master setup is complete, you can also use this node as primary [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing)
master. The following section explains how satellite/client nodes use the CLI commands in order to fetch
their signed certificates from this master node.
268 ## Signing Certificates on the Master <a id="distributed-monitoring-setup-sign-certificates-master"></a>
270 All certificates must be signed by the same certificate authority (CA). This ensures
271 that all nodes trust each other in a distributed monitoring environment.
273 This CA is generated during the [master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master)
274 and should be the same on all master instances.
276 You can avoid signing and deploying certificates [manually](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-certificates-manual)
277 by using built-in methods for auto-signing certificate signing requests (CSR):
279 * [CSR Auto-Signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) which uses a client ticket generated on the master as trust identifier.
280 * [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) which allows to sign pending certificate requests on the master.
282 Both methods are described in detail below.
> **Note**
>
> [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) is available in Icinga 2 v2.8+.
288 ### CSR Auto-Signing <a id="distributed-monitoring-setup-csr-auto-signing"></a>
290 A client which sends a certificate signing request (CSR) must authenticate itself
291 in a trusted way. The master generates a client ticket which is included in this request.
292 That way the master can verify that the request matches the previously trusted ticket
293 and sign the request.
> **Note**
>
> Icinga 2 v2.8 adds the possibility to forward signing requests on a satellite
> to the master node. This helps with the setup of [three level clusters](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client)
> and more.
Advantages:

* Nodes can be installed by different users who have received the client ticket.
304 * No manual interaction necessary on the master node.
305 * Automation tools like Puppet, Ansible, etc. can retrieve the pre-generated ticket in their client catalog
306 and run the node setup directly.
Disadvantages:

* Tickets need to be generated on the master and copied to client setup wizards.
311 * No central signing management.
314 Setup wizards for satellite/client nodes will ask you for this specific client ticket.
316 There are two possible ways to retrieve the ticket:
318 * [CLI command](11-cli-commands.md#cli-command-pki) executed on the master node.
319 * [REST API](12-icinga2-api.md#icinga2-api) request against the master node.
321 Required information:
323 Parameter | Description
324 --------------------|--------------------
325 Common name (CN) | **Required.** The common name for the satellite/client. By convention this should be the FQDN.
327 The following example shows how to generate a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:
329 [root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
331 Querying the [Icinga 2 API](12-icinga2-api.md#icinga2-api) on the master requires an [ApiUser](12-icinga2-api.md#icinga2-api-authentication)
332 object with at least the `actions/generate-ticket` permission.
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/conf.d/api-users.conf

    object ApiUser "client-pki-ticket" {
      password = "bea11beb7b810ea9ce6ea" //change this
      permissions = [ "actions/generate-ticket" ]
    }

    [root@icinga2-master1.localdomain /]# systemctl restart icinga2
343 Retrieve the ticket on the master node `icinga2-master1.localdomain` with `curl`, for example:
345 [root@icinga2-master1.localdomain /]# curl -k -s -u client-pki-ticket:bea11beb7b810ea9ce6ea -H 'Accept: application/json' \
346 -X POST 'https://localhost:5665/v1/actions/generate-ticket' -d '{ "cn": "icinga2-client1.localdomain" }'
348 Store that ticket number for the satellite/client setup below.
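In scripted setups you can capture the CLI command's output directly, for example in a shell variable:

    [root@icinga2-master1.localdomain /]# TICKET=$(icinga2 pki ticket --cn icinga2-client1.localdomain)
    [root@icinga2-master1.localdomain /]# echo $TICKET
    4f75d2ecd253575fe9180938ebff7cbca262f96e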
> **Note**
>
> Never expose the ticket salt and/or ApiUser credentials to your client nodes.
353 > Example: Retrieve the ticket on the Puppet master node and send the compiled catalog
354 > to the authorized Puppet agent node which will invoke the
355 > [automated setup steps](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup).
357 ### On-Demand CSR Signing <a id="distributed-monitoring-setup-on-demand-csr-signing"></a>
359 Icinga 2 v2.8 adds the possibility to sign certificates from clients without
360 requiring a client ticket for auto-signing.
Instead, the client sends a certificate signing request to the specified parent node.
This could either be directly the master, or a satellite which forwards the request
to the signing master.
Advantages:

* Central certificate request signing management.
369 * No pre-generated ticket is required for client setups.
Disadvantages:

* Asynchronous step for automated deployments.
374 * Needs client verification on the master.
You can list certificate requests by using the `ca list` CLI command. This also shows
which requests have already been signed.
381 [root@icinga2-master1.localdomain /]# icinga2 ca list
382 Fingerprint | Timestamp | Signed | Subject
383 -----------------------------------------------------------------|---------------------|--------|--------
384 403da5b228df384f07f980f45ba50202529cded7c8182abf96740660caa09727 | 2017/09/06 17:02:40 | * | CN = icinga2-client1.localdomain
385 71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850 | 2017/09/06 17:20:02 | | CN = icinga2-client2.localdomain
388 **Tip**: Add `--json` to the CLI command to retrieve the details in JSON format.
If you want to sign a specific request, you need to use the `ca sign` CLI command
and pass its fingerprint as an argument.
394 [root@icinga2-master1.localdomain /]# icinga2 ca sign 71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850
395 information/cli: Signed certificate for 'CN = icinga2-client2.localdomain'.
398 ## Client/Satellite Setup <a id="distributed-monitoring-setup-satellite-client"></a>
400 This section describes the setup of a satellite and/or client connected to an
401 existing master node setup. If you haven't done so already, please [run the master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master).
403 Icinga 2 on the master node must be running and accepting connections on port `5665`.
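You can verify this from a prospective client node with a simple TCP connection test, e.g. using netcat (an assumption; any connectivity check works):

    [root@icinga2-client1.localdomain /]# nc -zv icinga2-master1.localdomain 5665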
406 ### Client/Satellite Setup on Linux <a id="distributed-monitoring-setup-client-linux"></a>
408 Please ensure that you've run all the steps mentioned in the [client/satellite section](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
Install the [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and setup
the required [plugins](02-getting-started.md#setting-up-check-plugins) if you haven't done so already.
414 The next step is to run the `node wizard` CLI command.
416 In this example we're generating a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:
418 [root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
419 4f75d2ecd253575fe9180938ebff7cbca262f96e
421 Note: You don't need this step if you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
423 Start the wizard on the client `icinga2-client1.localdomain`:
426 [root@icinga2-client1.localdomain /]# icinga2 node wizard
428 Welcome to the Icinga 2 Setup Wizard!
430 We will guide you through all required configuration details.
433 Press `Enter` or add `y` to start a satellite or client setup.
436 Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]:
439 Press `Enter` to use the proposed name in brackets, or add a specific common name (CN). By convention
440 this should be the FQDN.
443 Starting the Client/Satellite setup routine...
445 Please specify the common name (CN) [icinga2-client1.localdomain]: icinga2-client1.localdomain
448 Specify the direct parent for this node. This could be your primary master `icinga2-master1.localdomain`
449 or a satellite node in a multi level cluster scenario.
452 Please specify the parent endpoint(s) (master or satellite) where this node should connect to:
453 Master/Satellite Common Name (CN from your master/satellite node): icinga2-master1.localdomain
456 Press `Enter` or choose `y` to establish a connection to the parent node.
459 Do you want to establish a connection to the parent node from this node? [Y/n]:
> **Note**
>
> If this node cannot connect to the parent node, choose `n`. The setup
> wizard will provide instructions for this scenario -- signing questions are disabled then.
467 Add the connection details for `icinga2-master1.localdomain`.
470 Please specify the master/satellite connection information:
471 Master/Satellite endpoint host (IP address or FQDN): 192.168.56.101
472 Master/Satellite endpoint port [5665]: 5665
475 You can add more parent nodes if necessary. Press `Enter` or choose `n`
476 if you don't want to add any. This comes in handy if you have more than one
477 parent node, e.g. two masters or two satellites.
480 Add more master/satellite endpoints? [y/N]:
483 Verify the parent node's certificate:
486 Parent certificate information:
488 Subject: CN = icinga2-master1.localdomain
489 Issuer: CN = Icinga CA
490 Valid From: Sep 7 13:41:24 2017 GMT
491 Valid Until: Sep 3 13:41:24 2032 GMT
492 Fingerprint: AC 99 8B 2B 3D B0 01 00 E5 21 FA 05 2E EC D5 A9 EF 9E AA E3
494 Is this information correct? [y/N]: y
The setup wizard fetches the parent node's certificate and asks
you to verify this information. This is to prevent MITM attacks or
any kind of untrusted parent relationship.
Note: The certificate is not fetched if you have chosen not to connect
to the parent node.
504 Proceed with adding the optional client ticket for [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing):
507 Please specify the request ticket generated on your Icinga 2 master (optional).
508 (Hint: # icinga2 pki ticket --cn 'icinga2-client1.localdomain'):
509 4f75d2ecd253575fe9180938ebff7cbca262f96e
512 In case you've chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
513 you can leave the ticket question blank.
515 Instead, Icinga 2 tells you to approve the request later on the master node.
518 No ticket was specified. Please approve the certificate signing request manually
519 on the master (see 'icinga2 ca list' and 'icinga2 ca sign --help' for details).
522 You can optionally specify a different bind host and/or port.
525 Please specify the API bind host/port (optional):
530 The next step asks you to accept configuration (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync))
531 and commands (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
534 Accept config from parent node? [y/N]: y
535 Accept commands from parent node? [y/N]: y
538 Next you can optionally specify the local and parent zone names. This will be reflected
539 in the generated zone configuration file.
541 Set the local zone name to something else, if you are installing a satellite or secondary master instance.
544 Local zone name [icinga2-client1.localdomain]:
547 Set the parent zone name to something else than `master` if this client connects to a satellite instance instead of the master.
550 Parent zone name [master]:
553 You can add more global zones in addition to `global-templates` and `director-global` if necessary.
Press `Enter` or choose `n` if you don't want to add any additional global zones.
557 Reconfiguring Icinga...
559 Default global zones: global-templates director-global
560 Do you want to specify additional global zones? [y/N]: N
Last but not least, the wizard asks you whether you want to disable the inclusion of the local configuration
directory in `conf.d`, or not. This defaults to disabled, as clients are either checked via command endpoint or
receive configuration synced from the parent zone.
568 Do you want to disable the inclusion of the conf.d directory [Y/n]: Y
569 Disabling the inclusion of the conf.d directory...
573 The wizard proceeds and you are good to go.
578 Now restart your Icinga 2 daemon to finish the installation!
> **Note**
>
> If you have chosen not to connect to the parent node, you cannot start
> Icinga 2 yet. The wizard asked you to manually copy the master's public
> CA certificate file into `/var/lib/icinga2/certs/ca.crt`.
>
> You need to manually sign the CSR on the master node.
589 Restart Icinga 2 as requested.
592 [root@icinga2-client1.localdomain /]# systemctl restart icinga2
595 Here is an overview of all parameters in detail:
597 Parameter | Description
598 --------------------|--------------------
599 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
600 Master common name | **Required.** Use the common name you've specified for your master node before.
601 Establish connection to the parent node | **Optional.** Whether the node should attempt to connect to the parent node or not. Defaults to `y`.
Master/Satellite endpoint host | **Required if the client needs to connect to the master/satellite.** The parent endpoint's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
Master/Satellite endpoint port | **Optional if the client needs to connect to the master/satellite.** The parent endpoint's listening port. This information is included in the `Endpoint` object configuration.
604 Add more master/satellite endpoints | **Optional.** If you have multiple master/satellite nodes configured, add them here.
605 Parent Certificate information | **Required.** Verify that the connecting host really is the requested master node.
606 Request ticket | **Optional.** Add the [ticket](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) generated on the master.
607 API bind host | **Optional.** Allows to specify the address the ApiListener is bound to. For advanced usage only.
608 API bind port | **Optional.** Allows to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
609 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
610 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
611 Local zone name | **Optional.** Allows to specify the name for the local zone. This comes in handy when this instance is a satellite, not a client. Defaults to the FQDN.
612 Parent zone name | **Optional.** Allows to specify the name for the parent zone. This is important if the client has a satellite instance as parent, not the master. Defaults to `master`.
613 Global zones | **Optional.** Allows to specify more global zones in addition to `global-templates` and `director-global`. Defaults to `n`.
614 Disable conf.d | **Optional.** Allows to disable the inclusion of the `conf.d` directory which holds local example configuration. Clients should retrieve their configuration from the parent node, or act as command endpoint execution bridge. Defaults to `y`.
616 The setup wizard will ensure that the following steps are taken:
618 * Enable the `api` feature.
619 * Create a certificate signing request (CSR) for the local node.
* Request a signed certificate (optionally with the provided ticket number) on the master node.
621 * Allow to verify the parent node's certificate.
622 * Store the signed client certificate and ca.crt in `/var/lib/icinga2/certs`.
623 * Update the `zones.conf` file with the new zone hierarchy.
624 * Update `/etc/icinga2/features-enabled/api.conf` (`accept_config`, `accept_commands`) and `constants.conf`.
625 * Update `/etc/icinga2/icinga2.conf` and comment out `include_recursive "conf.d"`.
627 You can verify that the certificate files are stored in the `/var/lib/icinga2/certs` directory.
> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`. Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.
> **Note**
>
> If the client is not directly connected to the certificate signing master,
> signing requests and responses might need some minutes to fully update the client certificates.
>
> If you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing),
> certificates need to be signed on the master first. Ticket-less setups require at least Icinga 2 v2.8+ on all involved instances.
642 Now that you've successfully installed a Linux/Unix satellite/client instance, please proceed to
643 the [configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
647 ### Client Setup on Windows <a id="distributed-monitoring-setup-client-windows"></a>
649 Download the MSI-Installer package from [https://packages.icinga.com/windows/](https://packages.icinga.com/windows/).
Requirements:

* Windows Vista/Server 2008 or higher
654 * Versions older than Windows 10/Server 2016 require the [Universal C Runtime for Windows](https://support.microsoft.com/en-us/help/2999226/update-for-universal-c-runtime-in-windows)
655 * [Microsoft .NET Framework 2.0](https://www.microsoft.com/de-de/download/details.aspx?id=1639) for the setup wizard
657 The installer package includes the [NSClient++](https://www.nsclient.org/) package
658 so that Icinga 2 can use its built-in plugins. You can find more details in
659 [this chapter](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
660 The Windows package also installs native [monitoring plugin binaries](06-distributed-monitoring.md#distributed-monitoring-windows-plugins)
661 to get you started more easily.
> **Note**
>
> Please note that Icinga 2 was designed to run as a light-weight client on Windows.
> There is no support for satellite instances.
668 #### Windows Client Setup Start <a id="distributed-monitoring-setup-client-windows-start"></a>
670 Run the MSI-Installer package and follow the instructions shown in the screenshots.
672 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_01.png)
673 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_02.png)
674 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_03.png)
675 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_04.png)
676 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_05.png)
678 The graphical installer offers to run the Icinga 2 setup wizard after the installation. Select
679 the check box to proceed.
> **Note**
>
> You can also run the Icinga 2 setup wizard from the Start menu later.
On a fresh installation the setup wizard guides you through the initial configuration.
It also provides a mechanism to send a certificate request to the [CSR signing master](06-distributed-monitoring.md#distributed-monitoring-setup-sign-certificates-master).
688 The following configuration details are required:
690 Parameter | Description
691 --------------------|--------------------
692 Instance name | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
693 Setup ticket | **Optional.** Paste the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing). If left blank, the certificate request must be [signed on the master node](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
695 Fill in the required information and click `Add` to add a new master connection.
697 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_01.png)
699 Add the following details:
701 Parameter | Description
702 -------------------------------|-------------------------------
Instance name | **Required.** The name of the master/satellite endpoint this client is a direct child of.
704 Master/Satellite endpoint host | **Required.** The master or satellite's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
705 Master/Satellite endpoint port | **Optional.** The master or satellite's listening port. This information is included in the `Endpoint` object configuration.
707 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02.png)
709 When needed you can add an additional global zone (the zones `global-templates` and `director-global` are added by default):
711 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02_global_zone.png)
713 Optionally enable the following settings:
715 Parameter | Description
716 ----------------------------------|----------------------------------
717 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
718 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
719 Run Icinga 2 service as this user | **Optional.** Specify a different Windows user. This defaults to `NT AUTHORITY\Network Service` and is required for more privileged service checks.
720 Install NSClient++ | **Optional.** The Windows installer bundles the NSClient++ installer for additional [plugin checks](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
722 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_03.png)
Verify the certificate from the master/satellite instance to which this node should connect.
726 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_04.png)
729 #### Bundled NSClient++ Setup <a id="distributed-monitoring-setup-client-windows-nsclient"></a>
If you have chosen to install/update the NSClient++ package, the Icinga 2 setup wizard asks
you to do so.
734 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_01.png)
736 Choose the `Generic` setup.
738 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_02.png)
740 Choose the `Custom` setup type.
742 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_03.png)
744 NSClient++ does not install a sample configuration by default. Change this as shown in the screenshot.
746 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_04.png)
Generate a secure password and enable the web server module. **Note**: The webserver module is
available starting with NSClient++ 0.5.0. Icinga 2 v2.6+ is required, as it bundles this version.
751 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_05.png)
753 Finish the installation.
755 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_06.png)
Open a web browser and navigate to `https://localhost:8443`. Enter the password you've configured
during the setup. In case you lost it, look into the `C:\Program Files\NSClient++\nsclient.ini`
configuration file.
761 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_07.png)
763 The NSClient++ REST API can be used to query metrics. [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
764 uses this transport method.
767 #### Finish Windows Client Setup <a id="distributed-monitoring-setup-client-windows-finish"></a>
769 Finish the Windows setup wizard.
771 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_with_ticket.png)
If you did not provide a setup ticket, you need to sign the certificate request on the master.
The setup wizard tells you to do so. The Icinga 2 service is already running at this point
and will automatically receive and update a signed client certificate.
> **Note**
>
> Ticket-less setups require at least Icinga 2 v2.8+ on all involved instances.
782 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_no_ticket.png)
784 Icinga 2 is automatically started as a Windows service.
786 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_running_service.png)
788 The Icinga 2 configuration is stored inside the `C:\ProgramData\icinga2` directory.
789 Click `Examine Config` in the setup wizard to open a new Explorer window.
791 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_examine_config.png)
The configuration files can be modified with your favorite editor, e.g. Notepad.
In order to use the [top down](06-distributed-monitoring.md#distributed-monitoring-top-down) client
configuration, prepare the following steps.
798 Add a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
799 for syncing check commands later. Navigate to `C:\ProgramData\icinga2\etc\icinga2` and open
the `zones.conf` file in your preferred editor. Add the following lines if they do not exist already:
    object Zone "global-templates" {
      global = true
    }
> **Note**
>
> Packages >= 2.8 provide this configuration by default.
812 You don't need any local configuration on the client except for
813 CheckCommand definitions which can be synced using the global zone
814 above. Therefore disable the inclusion of the `conf.d` directory
815 in the `icinga2.conf` file.
816 Navigate to `C:\ProgramData\icinga2\etc\icinga2` and open
the `icinga2.conf` file in your preferred editor. Remove or comment (`//`) the following line:

    // Commented out, not required on a client with top down mode
    //include_recursive "conf.d"
> **Note**
>
> Packages >= 2.9 provide an option in the setup wizard to disable this.
> Defaults to disabled.
To validate the configuration on Windows, open an administrator terminal
and run the following command:
834 C:\WINDOWS\system32>cd "C:\Program Files\ICINGA2\sbin"
835 C:\Program Files\ICINGA2\sbin>icinga2.exe daemon -C
838 **Note**: You have to run this command in a shell with `administrator` privileges.
840 Now you need to restart the Icinga 2 service. Run `services.msc` from the start menu
841 and restart the `icinga2` service. Alternatively, you can use the `net {start,stop}` CLI commands.
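For example, in an administrator command prompt:

    C:\WINDOWS\system32>net stop icinga2
    C:\WINDOWS\system32>net start icinga2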
843 ![Icinga 2 Windows Service Start/Stop](images/distributed-monitoring/icinga2_windows_cmd_admin_net_start_stop.png)
845 Now that you've successfully installed a Windows client, please proceed to
846 the [detailed configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
> **Note**
>
> The certificate location changed in v2.8 to `%ProgramData%\var\lib\icinga2\certs`.
> Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.
854 ## Configuration Modes <a id="distributed-monitoring-configuration-modes"></a>
856 There are different ways to ensure that the Icinga 2 cluster nodes execute
857 checks, send notifications, etc.
859 The preferred method is to configure monitoring objects on the master
860 and distribute the configuration to satellites and clients.
862 The following chapters will explain this in detail with hands-on manual configuration
863 examples. You should test and implement this once to fully understand how it works.
Once you are familiar with Icinga 2 and distributed monitoring, you
can start with additional integrations to manage and deploy your
configuration:
869 * [Icinga Director](https://github.com/icinga/icingaweb2-module-director) provides a web interface to manage configuration and also allows to sync imported resources (CMDB, PuppetDB, etc.)
870 * [Ansible Roles](https://github.com/Icinga/icinga2-ansible)
871 * [Puppet Module](https://github.com/Icinga/puppet-icinga2)
872 * [Chef Cookbook](https://github.com/Icinga/chef-icinga2)
874 More details can be found [here](13-addons.md#configuration-tools).
876 ### Top Down <a id="distributed-monitoring-top-down"></a>
878 There are two different behaviors with check execution:
880 * Send a command execution event remotely: The scheduler still runs on the parent node.
881 * Sync the host/service objects directly to the child node: Checks are executed locally.
883 Again, technically it does not matter whether this is a `client` or a `satellite`
884 which is receiving configuration or command execution events.
886 ### Top Down Command Endpoint <a id="distributed-monitoring-top-down-command-endpoint"></a>
888 This mode will force the Icinga 2 node to execute commands remotely on a specified endpoint.
The host/service object configuration is located on the master/satellite and the client only
needs the CheckCommand object definitions used there.
892 Every endpoint has its own remote check queue. The amount of checks executed simultaneously
can be limited on the endpoint with the `MaxConcurrentChecks` constant defined in [constants.conf](04-configuring-icinga-2.md#constants-conf). Icinga 2 may discard check requests
if the remote check queue is full.
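As a sketch, the limit can be raised or lowered on the affected endpoint in [constants.conf](04-configuring-icinga-2.md#constants-conf) -- `512` is the shipped default, assumed here:

    const MaxConcurrentChecks = 512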
896 ![Icinga 2 Distributed Top Down Command Endpoint](images/distributed-monitoring/icinga2_distributed_top_down_command_endpoint.png)
Advantages:

* No local checks need to be defined on the child node (client).
901 * Light-weight remote check execution (asynchronous events).
902 * No [replay log](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-command-endpoint-log-duration) is necessary for the child node.
903 * Pin checks to specific endpoints (if the child zone consists of 2 endpoints).
Disadvantages:

* If the child node is not connected, no more checks are executed.
* Requires an additional configuration attribute specified in host/service objects.
909 * Requires local `CheckCommand` object configuration. Best practice is to use a [global config zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync).
To make sure that all nodes involved will accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy
on all nodes.
915 * `icinga2-master1.localdomain` is the configuration master in this scenario.
916 * `icinga2-client1.localdomain` acts as client which receives command execution messages via command endpoint from the master. In addition, it receives the global check command configuration from the master.
918 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
920 The endpoint configuration could look like this, for example:
    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111"
    }
Next, you need to define two zones. There is no naming convention; best practice is either to use `master`, `satellite`/`client-fqdn` or to choose region names, for example `Europe`, `USA` and `Asia`.
934 **Note**: Each client requires its own zone and endpoint configuration. Best practice
935 is to use the client's FQDN for all object names.
937 The `master` zone is a parent of the `icinga2-client1.localdomain` zone:
    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master" //establish zone hierarchy
    }
951 In addition, add a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
952 for syncing check commands later:
    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Zone "global-templates" {
      global = true
    }
> **Note**
>
> Packages >= 2.8 provide this configuration by default.
966 You don't need any local configuration on the client except for
967 CheckCommand definitions which can be synced using the global zone
968 above. Therefore disable the inclusion of the `conf.d` directory
969 in `/etc/icinga2/icinga2.conf`.
972 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/icinga2.conf
974 // Commented out, not required on a client as command endpoint
975 //include_recursive "conf.d"
> **Note**
>
> Packages >= 2.9 provide an option in the setup wizard to disable this.
> Defaults to disabled.
983 Edit the `api` feature on the client `icinga2-client1.localdomain` in
984 the `/etc/icinga2/features-enabled/api.conf` file and make sure to set
985 `accept_commands` and `accept_config` to `true`:
    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/features-enabled/api.conf

    object ApiListener "api" {
       //...
       accept_commands = true
       accept_config = true
    }
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.

Example on CentOS 7:
1000 [root@icinga2-client1.localdomain /]# icinga2 daemon -C
1001 [root@icinga2-client1.localdomain /]# systemctl restart icinga2
1003 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1004 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1006 Once the clients have successfully connected, you are ready for the next step: **execute
1007 a remote check on the client using the command endpoint**.
1009 Include the host and service object configuration in the `master` zone
1010 -- this will help adding a secondary master for high-availability later.
1012 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1014 Add the host and service objects you want to monitor. There is
1015 no limitation for files and directories -- best practice is to
1016 sort things by type.
1018 By convention a master/satellite/client host object should use the same name as the endpoint object.
1019 You can also add multiple hosts which execute checks against remote services/clients.
    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive" //check is executed on the master
      address = "192.168.56.111"

      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }
Given that you are monitoring a Linux client, we'll add a remote [disk](10-icinga-template-library.md#plugin-check-command-disk)
check:
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

    apply Service "disk" {
      check_command = "disk"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.vars.client_endpoint
    }
1045 If you have your own custom `CheckCommand` definition, add it to the global zone:
    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf

    object CheckCommand "my-cmd" {
      //...
    }
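A fleshed-out definition could look like the following sketch -- `check_my_cmd`, its `-H` parameter and the `my_cmd_address` custom attribute are hypothetical placeholders, not a real plugin:

    object CheckCommand "my-cmd" {
      command = [ PluginDir + "/check_my_cmd" ]

      arguments = {
        "-H" = "$my_cmd_address$" //hypothetical plugin parameter
      }
    }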
1054 Save the changes and validate the configuration on the master node:
1056 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1058 Restart the Icinga 2 daemon (example for CentOS 7):
1060 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1062 The following steps will happen:
1064 * Icinga 2 validates the configuration on `icinga2-master1.localdomain` and restarts.
1065 * The `icinga2-master1.localdomain` node schedules and executes the checks.
1066 * The `icinga2-client1.localdomain` node receives the execute command event with additional command parameters.
1067 * The `icinga2-client1.localdomain` node maps the command parameters to the local check command, executes the check locally, and sends back the check result message.
1069 As you can see, no interaction from your side is required on the client itself, and it's not necessary to reload the Icinga 2 service on the client.
1071 You have learned the basics about command endpoint checks. Proceed with
1072 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
1073 section where you can find detailed information on extending the setup.
1076 ### Top Down Config Sync <a id="distributed-monitoring-top-down-config-sync"></a>
1078 This mode syncs the object configuration files within specified zones.
1079 It comes in handy if you want to configure everything on the master node
1080 and sync the satellite checks (disk, memory, etc.). The satellites run their
1081 own local scheduler and will send the check result messages back to the master.
1083 ![Icinga 2 Distributed Top Down Config Sync](images/distributed-monitoring/icinga2_distributed_top_down_config_sync.png)
Advantages:

* Sync the configuration files from the parent zone to the child zones.
1088 * No manual restart is required on the child nodes, as syncing, validation, and restarts happen automatically.
1089 * Execute checks directly on the child node's scheduler.
1090 * Replay log if the connection drops (important for keeping the check history in sync, e.g. for SLA reports).
1091 * Use a global zone for syncing templates, groups, etc.
Disadvantages:

* Requires a config directory on the master node with the zone name underneath `/etc/icinga2/zones.d`.
1096 * Additional zone and endpoint configuration needed.
1097 * Replay log is replicated on reconnect after connection loss. This might increase the data transfer and create an overload on the connection.
To make sure that all involved nodes accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy
on all nodes.
1103 * `icinga2-master1.localdomain` is the configuration master in this scenario.
1104 * `icinga2-client2.localdomain` acts as client which receives configuration from the master. Checks are scheduled locally.
1106 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
1108 The endpoint configuration could look like this:
    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112"
    }
Next, you need to define two zones. There is no naming convention; best practice is either to use `master`, `satellite`/`client-fqdn` or to choose region names, for example `Europe`, `USA` and `Asia`.
1122 **Note**: Each client requires its own zone and endpoint configuration. Best practice
1123 is to use the client's FQDN for all object names.
1125 The `master` zone is a parent of the `icinga2-client2.localdomain` zone:
    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master" //establish zone hierarchy
    }
1139 Edit the `api` feature on the client `icinga2-client2.localdomain` in
1140 the `/etc/icinga2/features-enabled/api.conf` file and set
1141 `accept_config` to `true`.
    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/features-enabled/api.conf

    object ApiListener "api" {
       //...
       accept_config = true
    }
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.
1153 Example on CentOS 7:
1155 [root@icinga2-client2.localdomain /]# icinga2 daemon -C
1156 [root@icinga2-client2.localdomain /]# systemctl restart icinga2
1158 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1159 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1162 **Tip**: Best practice is to use a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
1163 for common configuration items (check commands, templates, groups, etc.).
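For example, a shared service template synced to all nodes could be stored on the config master like this (a sketch; the attribute values are arbitrary):

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/templates.conf

    template Service "generic-service" {
      max_check_attempts = 5
      check_interval = 1m
      retry_interval = 30s
    }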
1165 Once the clients have connected successfully, it's time for the next step: **execute
1166 a local check on the client using the configuration sync**.
1168 Navigate to `/etc/icinga2/zones.d` on your master node
1169 `icinga2-master1.localdomain` and create a new directory with the same
1170 name as your satellite/client zone name:
1172 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/icinga2-client2.localdomain
1174 Add the host and service objects you want to monitor. There is
1175 no limitation for files and directories -- best practice is to
1176 sort things by type.
1178 By convention a master/satellite/client host object should use the same name as the endpoint object.
1179 You can also add multiple hosts which execute checks against remote services/clients.
    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/icinga2-client2.localdomain
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim hosts.conf

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      zone = "master" //optional trick: sync the required host object to the client, but enforce the "master" zone to execute the check
    }
Given that you are monitoring a Linux client, we'll just add a local [disk](10-icinga-template-library.md#plugin-check-command-disk)
check:
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim services.conf

    object Service "disk" {
      host_name = "icinga2-client2.localdomain"

      check_command = "disk"
    }
1201 Save the changes and validate the configuration on the master node:
1203 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1205 Restart the Icinga 2 daemon (example for CentOS 7):
1207 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1209 The following steps will happen:
1211 * Icinga 2 validates the configuration on `icinga2-master1.localdomain`.
1212 * Icinga 2 copies the configuration into its zone config store in `/var/lib/icinga2/api/zones`.
1213 * The `icinga2-master1.localdomain` node sends a config update event to all endpoints in the same or direct child zones.
1214 * The `icinga2-client2.localdomain` node accepts config and populates the local zone config store with the received config files.
1215 * The `icinga2-client2.localdomain` node validates the configuration and automatically restarts.
Again, there is no interaction required on the client
itself.
1220 You can also use the config sync inside a high-availability zone to
1221 ensure that all config objects are synced among zone members.
**Note**: You can only have one so-called "config master" in a zone which stores
the configuration in the `zones.d` directory.
Multiple nodes with configuration files in the `zones.d` directory are
not supported.
1228 Now that you've learned the basics about the configuration sync, proceed with
1229 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
1230 section where you can find detailed information on extending the setup.
If you are eager to start fresh instead, you might take a look at the
1235 [Icinga Director](https://github.com/icinga/icingaweb2-module-director).
1237 ## Scenarios <a id="distributed-monitoring-scenarios"></a>
1239 The following examples should give you an idea on how to build your own
1240 distributed monitoring environment. We've seen them all in production
1241 environments and received feedback from our [community](https://www.icinga.com/community/get-involved/)
1242 and [partner support](https://www.icinga.com/services/support/) channels:
1244 * Single master with clients.
1245 * HA master with clients as command endpoint.
1246 * Three level cluster with config HA masters, satellites receiving config sync, and clients checked using command endpoint.
1248 ### Master with Clients <a id="distributed-monitoring-master-clients"></a>
1250 ![Icinga 2 Distributed Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_clients.png)
1252 * `icinga2-master1.localdomain` is the primary master node.
1253 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1258 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [client](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
1260 Edit the `zones.conf` configuration file on the master:
1262 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
    }

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the master actively tries to connect to the client
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master"
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1296 The two client nodes do not necessarily need to know about each other. The only important thing
1297 is that they know about the parent zone and their endpoint members (and optionally the global zone).
1299 If you specify the `host` attribute in the `icinga2-master1.localdomain` endpoint object,
1300 the client will actively try to connect to the master node. Since we've specified the client
1301 endpoint's attribute on the master node already, we don't want the clients to connect to the
1302 master. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
1304 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client1.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1328 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client2.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1352 Now it is time to define the two client hosts and apply service checks using
1353 the command endpoint execution method on them. Note: You can also use the
1354 config sync mode here.
1356 Create a new configuration directory on the master node:
1358 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1360 Add the two client nodes as host objects:
1362 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
1363 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }
1377 Add services using command endpoint checks:
1379 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
    apply Service "ping4" {
      check_command = "ping4"
      //check is executed on the master node
      assign where host.address
    }

    apply Service "disk" {
      check_command = "disk"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.vars.client_endpoint
    }
1396 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
1398 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1399 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1401 Open Icinga Web 2 and check the two newly created client hosts with two new services
1402 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
1404 ### High-Availability Master with Clients <a id="distributed-monitoring-scenarios-ha-master-clients"></a>
1406 ![Icinga 2 Distributed High Availability Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_ha_master_clients.png)
1408 This scenario is similar to the one in the [previous section](06-distributed-monitoring.md#distributed-monitoring-master-clients). The only difference is that we will now set up two master nodes in a high-availability setup.
These nodes must be configured as zone and endpoint objects.
1411 The setup uses the capabilities of the Icinga 2 cluster. All zone members
1412 replicate cluster events amongst each other. In addition to that, several Icinga 2
1413 features can enable HA functionality.
1415 **Note**: All nodes in the same zone require that you enable the same features for high-availability (HA).
* `icinga2-master1.localdomain` is the config master node.
* `icinga2-master2.localdomain` is the secondary master node without config in `zones.d`.
1421 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1426 * Set up `icinga2-master2.localdomain` as [client](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
1427 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (when asked for adding multiple masters, set to `y` and add the secondary master `icinga2-master2.localdomain`).
1429 In case you don't want to use the CLI commands, you can also manually create and sync the
1430 required SSL certificates. We will modify and discuss all the details of the automatically generated configuration here.
1432 Since there are now two nodes in the same zone, we must consider the
1433 [high-availability features](06-distributed-monitoring.md#distributed-monitoring-high-availability-features).
1435 * Checks and notifications are balanced between the two master nodes. That's fine, but it requires check plugins and notification scripts to exist on both nodes.
1436 * The IDO feature will only be active on one node by default. Since all events are replicated between both nodes, it is easier to just have one central database.
1438 One possibility is to use a dedicated MySQL cluster VIP (external application cluster)
1439 and leave the IDO feature with enabled HA capabilities. Alternatively,
1440 you can disable the HA feature and write to a local database on each node.
1441 Both methods require that you configure Icinga Web 2 accordingly (monitoring
1442 backend, IDO database, used transports, etc.).
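As a sketch of the second approach, disabling HA and writing to a local database on each node could look like this in `/etc/icinga2/features-enabled/ido-mysql.conf` (the connection details are placeholders for your own setup):

    object IdoMysqlConnection "ido-mysql" {
      user = "icinga"
      password = "icinga"
      host = "localhost"
      database = "icinga"

      enable_ha = false //each node writes to its own local database
    }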
1444 The zone hierarchy could look like this. It involves putting the two master nodes
1445 `icinga2-master1.localdomain` and `icinga2-master2.localdomain` into the `master` zone.
1447 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-master2.localdomain" {
      host = "192.168.56.102"
    }

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the master actively tries to connect to the client
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master"
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1486 The two client nodes do not necessarily need to know about each other. The only important thing
1487 is that they know about the parent zone and their endpoint members (and optionally about the global zone).
1489 If you specify the `host` attribute in the `icinga2-master1.localdomain` and `icinga2-master2.localdomain`
1490 endpoint objects, the client will actively try to connect to the master node. Since we've specified the client
1491 endpoint's attribute on the master node already, we don't want the clients to connect to the
1492 master nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
1494 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-master2.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client1.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1522 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-master2.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client2.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1550 Now it is time to define the two client hosts and apply service checks using
1551 the command endpoint execution method. Note: You can also use the
1552 config sync mode here.
1554 Create a new configuration directory on the master node `icinga2-master1.localdomain`.
1555 **Note**: The secondary master node `icinga2-master2.localdomain` receives the
1556 configuration using the [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync).
1558 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1560 Add the two client nodes as host objects:
1562 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
1563 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }
1577 Add services using command endpoint checks:
1579 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
    apply Service "ping4" {
      check_command = "ping4"
      //check is executed on the master node
      assign where host.address
    }

    apply Service "disk" {
      check_command = "disk"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.vars.client_endpoint
    }
1596 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
1598 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1599 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1601 Open Icinga Web 2 and check the two newly created client hosts with two new services
1602 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
1604 **Tip**: It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
1605 to make sure that your cluster notifies you in case of failure.
1608 ### Three Levels with Master, Satellites, and Clients <a id="distributed-monitoring-scenarios-master-satellite-client"></a>
1610 ![Icinga 2 Distributed Master and Satellites with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_satellite_client.png)
1612 This scenario combines everything you've learned so far: High-availability masters,
1613 satellites receiving their configuration from the master zone, and clients checked via command
1614 endpoint from the satellite zones.
1616 **Tip**: It can get complicated, so grab a pen and paper and bring your thoughts to life.
1617 Play around with a test setup before using it in a production environment!
* `icinga2-master1.localdomain` is the configuration master node.
* `icinga2-master2.localdomain` is the secondary master node without configuration in `zones.d`.
1623 * `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` are satellite nodes in a `master` child zone.
1624 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1629 * Set up `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
1630 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
When asked for the master endpoint providing CSR auto-signing capabilities,
add the master node which holds the CA and has the `ApiListener` feature configured and enabled.
The parent endpoint must still remain the satellite endpoint name.
1636 Example for `icinga2-client1.localdomain`:
1638 Please specify the master endpoint(s) this node should connect to:
Add the first satellite `icinga2-satellite1.localdomain` as master:
1642 Master Common Name (CN from your master setup): icinga2-satellite1.localdomain
1643 Do you want to establish a connection to the master from this node? [Y/n]: y
1644 Please fill out the master connection information:
1645 Master endpoint host (Your master's IP address or FQDN): 192.168.56.105
1646 Master endpoint port [5665]:
1648 Add the second satellite `icinga2-satellite2.localdomain` as master:
1650 Add more master endpoints? [y/N]: y
1651 Master Common Name (CN from your master setup): icinga2-satellite2.localdomain
1652 Do you want to establish a connection to the master from this node? [Y/n]: y
1653 Please fill out the master connection information:
1654 Master endpoint host (Your master's IP address or FQDN): 192.168.56.106
1655 Master endpoint port [5665]:
1656 Add more master endpoints? [y/N]: n
Specify the master node `icinga2-master1.localdomain` with the CA private key and ticket salt:

    Please specify the master connection for CSR auto-signing (defaults to master endpoint host):
    Host [192.168.56.106]: icinga2-master1.localdomain
    Port [5665]:
In case you cannot connect to the master node from your clients, you'll need to manually
1665 to [generate the SSL certificates](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-certificates-manual)
1666 and modify the configuration accordingly.
1668 We'll discuss the details of the required configuration below.
1670 The zone hierarchy can look like this. We'll define only the directly connected zones here.
1672 You can safely deploy this configuration onto all master and satellite zone
members. Remember to control the endpoint [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction)
using the `host` attribute.
1676 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-master2.localdomain" {
      host = "192.168.56.102"
    }

    object Endpoint "icinga2-satellite1.localdomain" {
      host = "192.168.56.105"
    }

    object Endpoint "icinga2-satellite2.localdomain" {
      host = "192.168.56.106"
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1709 Repeat the configuration step for `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain`
1710 and `icinga2-satellite2.localdomain`.
1712 Since we want to use [top down command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) checks,
1713 we must configure the client endpoint and zone objects.
1714 In order to minimize the effort, we'll sync the client zone and endpoint configuration to the
1715 satellites where the connection information is needed as well.
1717 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/{master,satellite,global-templates}
1718 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite
1720 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf
    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the satellite actively tries to connect to the client
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "satellite"
    }
1732 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf
    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the satellite actively tries to connect to the client
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "satellite"
    }
1744 The two client nodes do not necessarily need to know about each other, either. The only important thing
1745 is that they know about the parent zone (the satellite) and their endpoint members (and optionally the global zone).
1747 If you specify the `host` attribute in the `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain`
1748 endpoint objects, the client node will actively try to connect to the satellite node. Since we've specified the client
1749 endpoint's attribute on the satellite node already, we don't want the client node to connect to the
1750 satellite nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
1752 Example for `icinga2-client1.localdomain`:
1754 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-satellite1.localdomain" {
      //do not actively connect to the satellite by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-satellite2.localdomain" {
      //do not actively connect to the satellite by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client1.localdomain" {
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "satellite"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1783 Example for `icinga2-client2.localdomain`:
1785 [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-satellite1.localdomain" {
      //do not actively connect to the satellite by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-satellite2.localdomain" {
      //do not actively connect to the satellite by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client2.localdomain" {
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "satellite"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1814 Now it is time to define the two client hosts on the master, sync them to the satellites
1815 and apply service checks using the command endpoint execution method to them.
1816 Add the two client nodes as host objects to the `satellite` zone.
1818 We've already created the directories in `/etc/icinga2/zones.d` including the files for the
1819 zone and endpoint configuration for the clients.
1821 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite
1823 Add the host object configuration for the `icinga2-client1.localdomain` client. You should
1824 have created the configuration file in the previous steps and it should contain the endpoint
1825 and zone object configuration already.
1827 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf
    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }
1835 Add the host object configuration for the `icinga2-client2.localdomain` client configuration file:
1837 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf
    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }
1845 Add a service object which is executed on the satellite nodes (e.g. `ping4`). Pin the apply rule to the `satellite` zone only.
1847 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf
    apply Service "ping4" {
      check_command = "ping4"
      //check is executed on the satellite node
      assign where host.zone == "satellite" && host.address
    }
1855 Add services using command endpoint checks. Pin the apply rules to the `satellite` zone only.
1857 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf
    apply Service "disk" {
      check_command = "disk"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.zone == "satellite" && host.vars.client_endpoint
    }
1868 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
1870 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1871 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1873 Open Icinga Web 2 and check the two newly created client hosts with two new services
1874 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
1876 **Tip**: It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
1877 to make sure that your cluster notifies you in case of failure.
1879 ## Best Practice <a id="distributed-monitoring-best-practice"></a>
1881 We've put together a collection of configuration examples from community feedback.
If you'd like to share your tips and tricks with us, please join the [community channels](https://www.icinga.com/community/get-involved/)!
1884 ### Global Zone for Config Sync <a id="distributed-monitoring-global-zone-config-sync"></a>
1886 Global zones can be used to sync generic configuration objects
1887 to all nodes depending on them. Common examples are:
1889 * Templates which are imported into zone specific objects.
1890 * Command objects referenced by Host, Service, Notification objects.
1891 * Apply rules for services, notifications and dependencies.
* User objects referenced in notifications.
* Group objects.
* TimePeriod objects.
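For example, a service template synced through the global zone might look like this. A minimal sketch; the template name and values mirror the `generic-service` example shipped in `conf.d/templates.conf`:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/templates.conf

    template Service "generic-service" {
      max_check_attempts = 5
      check_interval = 1m
      retry_interval = 30s
    }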
Plugin scripts and binaries cannot be synced; the sync is for Icinga 2
configuration files only. Use your preferred package repository
and/or configuration management tool (Puppet, Ansible, Chef, etc.)
to deploy them to your nodes.
1901 **Note**: Checkable objects (hosts and services) cannot be put into a global
1902 zone. The configuration validation will terminate with an error.
1904 The zone object configuration must be deployed on all nodes which should receive
1905 the global configuration files:
1907 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
    object Zone "global-templates" {
      global = true
    }
1913 Note: Packages >= 2.8 provide this configuration by default.
1915 Similar to the zone configuration sync you'll need to create a new directory in
1916 `/etc/icinga2/zones.d`:
1918 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
1920 Next, add a new check command, for example:
1922 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf
    object CheckCommand "my-cmd" {
      //...
    }
Restart the client(s) which should receive the global zone
before restarting the parent master/satellite nodes.
1931 Then validate the configuration on the master node and restart Icinga 2.
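For example, assuming the CentOS 7 nodes used throughout this chapter:

    [root@icinga2-client1.localdomain /]# systemctl restart icinga2

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2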
1933 **Tip**: You can copy the example configuration files located in `/etc/icinga2/conf.d`
1934 into your global zone.
1938 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/conf.d
1939 [root@icinga2-master1.localdomain /etc/icinga2/conf.d]# cp {commands,groups,notifications,services,templates,timeperiods,users}.conf /etc/icinga2/zones.d/global-templates
1941 ### Health Checks <a id="distributed-monitoring-health-checks"></a>
In case of network failures or other problems, your monitoring might
either have late check results or just send out mass alarms for unknown
checks.
1947 In order to minimize the problems caused by this, you should configure
1948 additional health checks.
1950 The `cluster` check, for example, will check if all endpoints in the current zone and the directly
1951 connected zones are working properly:
1953 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1954 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf
    object Host "icinga2-master1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.101"
    }
1961 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf
    object Service "cluster" {
      check_command = "cluster"
      check_interval = 5s
      retry_interval = 1s

      host_name = "icinga2-master1.localdomain"
    }
1971 The `cluster-zone` check will test whether the configured target zone is currently
1972 connected or not. This example adds a health check for the [ha master with clients scenario](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients).
1974 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf
    apply Service "cluster-health" {
      check_command = "cluster-zone"

      display_name = "cluster-health-" + host.name

      /* This follows the convention that the client zone name is the FQDN which is the same as the host object name. */
      vars.cluster_zone = host.name

      assign where host.vars.client_endpoint
    }
1987 In case you cannot assign the `cluster_zone` attribute, add specific
1988 checks to your cluster:
1990 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf
    object Service "cluster-zone-satellite" {
      check_command = "cluster-zone"
      check_interval = 5s
      retry_interval = 1s
      vars.cluster_zone = "satellite"

      host_name = "icinga2-master1.localdomain"
    }
2002 If you are using top down checks with command endpoint configuration, you can
2003 add a dependency which prevents notifications for all other failing services:
2005 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/dependencies.conf
    apply Dependency "health-check" to Service {
      parent_service_name = "cluster-health"

      disable_notifications = true

      assign where host.vars.client_endpoint
      ignore where service.name == "cluster-health"
    }
2017 ### Pin Checks in a Zone <a id="distributed-monitoring-pin-checks-zone"></a>
In case you want to pin specific checks to their endpoints in a given zone, you'll need to use
the `command_endpoint` attribute. This is reasonable if you want to
execute a local disk check in the `master` zone on a specific endpoint.
2023 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
2024 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf
    object Host "icinga2-master1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.101"
    }
2031 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf
    apply Service "disk" {
      check_command = "disk"

      command_endpoint = host.name //requires a host object matching the endpoint object name e.g. icinga2-master1.localdomain

      assign where host.zone == "master" && match("icinga2-master*", host.name)
    }
The `host.zone` attribute check inside the expression ensures that
the service object is only created for host objects inside the `master`
zone. In addition to that, the [match](18-library-reference.md#global-functions-match)
function ensures that services are only created for the master nodes.
2046 ### Windows Firewall <a id="distributed-monitoring-windows-firewall"></a>
2048 #### ICMP Requests <a id="distributed-monitoring-windows-firewall-icmp"></a>
2050 By default ICMP requests are disabled in the Windows firewall. You can
2051 change that by [adding a new rule](https://support.microsoft.com/en-us/kb/947709).
2053 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow
2055 #### Icinga 2 <a id="distributed-monitoring-windows-firewall-icinga2"></a>
2057 If your master/satellite nodes should actively connect to the Windows client
2058 you'll also need to ensure that port `5665` is enabled.
2060 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 5665 (Icinga 2)" dir=in action=allow protocol=TCP localport=5665
2062 #### NSClient++ API <a id="distributed-monitoring-windows-firewall-nsclient-api"></a>
2064 If the [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
2065 plugin is used to query NSClient++, you need to ensure that its port is enabled.
2067 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 8443 (NSClient++ API)" dir=in action=allow protocol=TCP localport=8443
For security reasons, it is advised to allow access to the NSClient++ HTTP API from the
local Icinga 2 client only. Remote connections to the HTTP API
are not recommended.
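One way to enforce this is to restrict the allowed hosts in `nsclient.ini`. A minimal sketch, assuming the default NSClient++ configuration layout:

    [/settings/default]
    allowed hosts = 127.0.0.1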
2073 ### Windows Client and Plugins <a id="distributed-monitoring-windows-plugins"></a>
2075 The Icinga 2 package on Windows already provides several plugins.
2076 Detailed [documentation](10-icinga-template-library.md#windows-plugins) is available for all check command definitions.
2078 Add the following `include` statement on all your nodes (master, satellite, client):
2080 vim /etc/icinga2/icinga2.conf
2082 include <windows-plugins>
2084 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
2085 scenario we'll now add a local disk check.
2087 First, add the client node as host object:
2089 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2090 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
      vars.os_type = "windows"
    }
2099 Next, add the disk check using command endpoint checks (details in the
2100 [disk-windows](10-icinga-template-library.md#windows-plugins-disk-windows) documentation):
2102 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
    apply Service "disk C:" {
      check_command = "disk-windows"

      vars.disk_win_path = "C:"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.vars.os_type == "windows" && host.vars.client_endpoint
    }
2115 Validate the configuration and restart Icinga 2.
2117 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2118 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
2120 Open Icinga Web 2 and check your newly added Windows disk check :)
2122 ![Icinga 2 Client Windows](images/distributed-monitoring/icinga2_distributed_windows_client_disk_icingaweb2.png)
2124 If you want to add your own plugins please check [this chapter](05-service-monitoring.md#service-monitoring-requirements)
2125 for the requirements.
2127 ### Windows Client and NSClient++ <a id="distributed-monitoring-windows-nscp"></a>
2129 There are two methods available for querying NSClient++:
2131 * Query the [HTTP API](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api) locally from an Icinga 2 client (requires a running NSClient++ service)
2132 * Run a [local CLI check](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-local) (does not require NSClient++ as a service)
2134 Both methods have their advantages and disadvantages. One thing to
2135 note: If you rely on performance counter delta calculations such as
2136 CPU utilization, please use the HTTP API instead of the CLI sample call.
#### NSClient++ with check_nscp_api <a id="distributed-monitoring-windows-nscp-check-api"></a>
2140 The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
2141 you to install the NSClient++ package. In addition to the Windows plugins you can
2142 use the [nscp_api command](10-icinga-template-library.md#nscp-check-api) provided by the Icinga Template Library (ITL).
The initial setup for the NSClient++ API and the required arguments
is described in the ITL chapter for the [nscp_api](10-icinga-template-library.md#nscp-check-api) CheckCommand.
2147 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
2148 scenario we'll now add a local nscp check which queries the NSClient++ API to check the free disk space.
2150 Define a host object called `icinga2-client2.localdomain` on the master. Add the `nscp_api_password`
2151 custom attribute and specify the drives to check.
2153 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2154 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
      vars.os_type = "Windows"
      vars.nscp_api_password = "icinga"
      vars.drives = [ "C:", "D:" ]
    }
2165 The service checks are generated using an [apply for](03-monitoring-basics.md#using-apply-for)
2166 rule based on `host.vars.drives`:
2168 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
    apply Service "nscp-api-" for (drive in host.vars.drives) {
      import "generic-service"

      check_command = "nscp_api"
      command_endpoint = host.vars.client_endpoint

      //display_name = "nscp-drive-" + drive

      vars.nscp_api_host = "localhost"
      vars.nscp_api_query = "check_drivesize"
      vars.nscp_api_password = host.vars.nscp_api_password
      vars.nscp_api_arguments = [ "drive=" + drive ]

      ignore where host.vars.os_type != "Windows"
    }
2186 Validate the configuration and restart Icinga 2.
2188 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2189 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
Two new services ("nscp-api-C:" and "nscp-api-D:") will be visible in Icinga Web 2.
2193 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-api](images/distributed-monitoring/icinga2_distributed_windows_nscp_api_drivesize_icingaweb2.png)
2195 Note: You can also omit the `command_endpoint` configuration to execute
2196 the command on the master. This also requires a different value for `nscp_api_host`
2197 which defaults to `host.address`.
2199 //command_endpoint = host.vars.client_endpoint
2201 //vars.nscp_api_host = "localhost"
2203 You can verify the check execution by looking at the `Check Source` attribute
2204 in Icinga Web 2 or the REST API.
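A quick way to query this via the REST API; a sketch assuming an `ApiUser` named `root` with the password `icinga` (the pipe to `python -m json.tool` is just an optional pretty-printer):

    curl -k -s -u root:icinga 'https://icinga2-master1.localdomain:5665/v1/objects/services?attrs=__name&attrs=last_check_result' | python -m json.tool

The `check_source` field inside `last_check_result` names the endpoint which actually executed the check.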
2206 If you want to monitor specific Windows services, you could use the following example:
2208 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2209 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
      vars.os_type = "Windows"
      vars.nscp_api_password = "icinga"
      vars.services = [ "Windows Update", "wscsvc" ]
    }
2220 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
    apply Service "nscp-api-" for (svc in host.vars.services) {
      import "generic-service"

      check_command = "nscp_api"
      command_endpoint = host.vars.client_endpoint

      //display_name = "nscp-service-" + svc

      vars.nscp_api_host = "localhost"
      vars.nscp_api_query = "check_service"
      vars.nscp_api_password = host.vars.nscp_api_password
      vars.nscp_api_arguments = [ "service=" + svc ]

      ignore where host.vars.os_type != "Windows"
    }
#### NSClient++ with nscp-local <a id="distributed-monitoring-windows-nscp-check-local"></a>
2240 The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
2241 you to install the NSClient++ package. In addition to the Windows plugins you can
2242 use the [nscp-local commands](10-icinga-template-library.md#nscp-plugin-check-commands)
2243 provided by the Icinga Template Library (ITL).
2245 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++](images/distributed-monitoring/icinga2_distributed_windows_nscp.png)
2247 Add the following `include` statement on all your nodes (master, satellite, client):
    vim /etc/icinga2/icinga2.conf

    include <nscp>
2253 The CheckCommand definitions will automatically determine the installed path
2254 to the `nscp.exe` binary.
2256 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
2257 scenario we'll now add a local nscp check querying a given performance counter.
2259 First, add the client node as host object:
2261 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2262 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention that host name == endpoint name
      vars.os_type = "windows"
    }
2271 Next, add a performance counter check using command endpoint checks (details in the
2272 [nscp-local-counter](10-icinga-template-library.md#nscp-check-local-counter) documentation):
2274 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
    apply Service "nscp-local-counter-cpu" {
      check_command = "nscp-local-counter"
      command_endpoint = host.vars.client_endpoint

      vars.nscp_counter_name = "\\Processor(_total)\\% Processor Time"
      vars.nscp_counter_perfsyntax = "Total Processor Time"
      vars.nscp_counter_warning = 1
      vars.nscp_counter_critical = 5

      vars.nscp_counter_showall = true

      assign where host.vars.os_type == "windows" && host.vars.client_endpoint
    }
2290 Validate the configuration and restart Icinga 2.
2292 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2293 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
2295 Open Icinga Web 2 and check your newly added Windows NSClient++ check :)
2297 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-local](images/distributed-monitoring/icinga2_distributed_windows_nscp_counter_icingaweb2.png)
2299 ## Advanced Hints <a id="distributed-monitoring-advanced-hints"></a>
2301 You can find additional hints in this section if you prefer to go your own route
2302 with automating setups (setup, certificates, configuration).
2304 ### Certificate Auto-Renewal <a id="distributed-monitoring-certificate-auto-renewal"></a>
Icinga 2 v2.8+ adds the possibility that nodes request certificate updates
on their own. If their expiration date is soon enough, they automatically
renew their already signed certificates by sending a signing request to the parent node.
2311 ### High-Availability for Icinga 2 Features <a id="distributed-monitoring-high-availability-features"></a>
2313 All nodes in the same zone require that you enable the same features for high-availability (HA).
2315 By default, the following features provide advanced HA functionality:
2317 * [Checks](06-distributed-monitoring.md#distributed-monitoring-high-availability-checks) (load balanced, automated failover).
2318 * [Notifications](06-distributed-monitoring.md#distributed-monitoring-high-availability-notifications) (load balanced, automated failover).
2319 * [DB IDO](06-distributed-monitoring.md#distributed-monitoring-high-availability-db-ido) (Run-Once, automated failover).
2321 #### High-Availability with Checks <a id="distributed-monitoring-high-availability-checks"></a>
2323 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2324 have the `checker` feature enabled.
Example:

    # icinga2 feature enable checker
2330 All nodes in the same zone load-balance the check execution. If one instance shuts down,
2331 the other nodes will automatically take over the remaining checks.
2333 #### High-Availability with Notifications <a id="distributed-monitoring-high-availability-notifications"></a>
2335 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2336 have the `notification` feature enabled.
Example:

    # icinga2 feature enable notification
Notifications are load-balanced amongst all nodes in a zone. By default this functionality is enabled.
2344 If your nodes should send out notifications independently from any other nodes (this will cause
2345 duplicated notifications if not properly handled!), you can set `enable_ha = false`
2346 in the [NotificationComponent](09-object-types.md#objecttype-notificationcomponent) feature.
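A minimal sketch for `/etc/icinga2/features-enabled/notification.conf`:

    object NotificationComponent "notification" {
      enable_ha = false
    }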
2348 #### High-Availability with DB IDO <a id="distributed-monitoring-high-availability-db-ido"></a>
2350 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2351 have the DB IDO feature enabled.
2353 Example DB IDO MySQL:
2355 # icinga2 feature enable ido-mysql
2357 By default the DB IDO feature only runs on one node. All other nodes in the same zone disable
2358 the active IDO database connection at runtime. The node with the active DB IDO connection is
2359 not necessarily the zone master.
2361 **Note**: The DB IDO HA feature can be disabled by setting the `enable_ha` attribute to `false`
2362 for the [IdoMysqlConnection](09-object-types.md#objecttype-idomysqlconnection) or
[IdoPgsqlConnection](09-object-types.md#objecttype-idopgsqlconnection) object on **all** nodes in the **same** zone.
2366 All endpoints will enable the DB IDO feature and connect to the configured
2367 database and dump configuration, status and historical data on their own.
2369 If the instance with the active DB IDO connection dies, the HA functionality will
2370 automatically elect a new DB IDO master.
2372 The DB IDO feature will try to determine which cluster endpoint is currently writing
2373 to the database and bail out if another endpoint is active. You can manually verify that
2374 by running the following query command:
2376 icinga=> SELECT status_update_time, endpoint_name FROM icinga_programstatus;
2377 status_update_time | endpoint_name
2378 ------------------------+---------------
2379 2016-08-15 15:52:26+02 | icinga2-master1.localdomain
2382 This is useful when the cluster connection between endpoints breaks, and prevents
data duplication in split-brain scenarios. The failover timeout can be set with the
`failover_timeout` attribute, but not lower than 60 seconds.
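If you need a longer grace period before the failover happens, raise the attribute on the IDO connection object. A sketch with illustrative values (the connection details are placeholders):

    object IdoMysqlConnection "ido-mysql" {
      user = "icinga"
      password = "icinga"
      host = "localhost"
      database = "icinga"

      failover_timeout = 120s
    }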
2386 ### Endpoint Connection Direction <a id="distributed-monitoring-advanced-hints-connection-direction"></a>
A node will attempt to connect to another node when its local [Endpoint](09-object-types.md#objecttype-endpoint) object
configuration specifies a valid `host` attribute (FQDN or IP address).
2391 Example for the master node `icinga2-master1.localdomain` actively connecting
2392 to the client node `icinga2-client1.localdomain`:
2394 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
    }
2403 Example for the client node `icinga2-client1.localdomain` not actively
2404 connecting to the master node `icinga2-master1.localdomain`:
2406 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }
2415 It is not necessary that both the master and the client node establish
2416 two connections to each other. Icinga 2 will only use one connection
2417 and close the second connection if established.
**Tip**: Choose either to let master/satellite nodes connect to client nodes or vice versa.
2423 ### Disable Log Duration for Command Endpoints <a id="distributed-monitoring-advanced-hints-command-endpoint-log-duration"></a>
2425 The replay log is a built-in mechanism to ensure that nodes in a distributed setup
2426 keep the same history (check results, notifications, etc.) when nodes are temporarily
2427 disconnected and then reconnect.
2429 This functionality is not needed when a master/satellite node is sending check
execution events to a client which is purely configured for [command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) checks.
The [Endpoint](09-object-types.md#objecttype-endpoint) object attribute `log_duration` can
be lowered or set to 0 to fully disable any log replay updates when the
client is not connected.
2437 Configuration on the master node `icinga2-master1.localdomain`:
2439 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
      log_duration = 0
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the master actively tries to connect to the client
      log_duration = 0
    }
2453 Configuration on the client `icinga2-client1.localdomain`:
2455 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
      log_duration = 0
    }

    object Endpoint "icinga2-master2.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
      log_duration = 0
    }
2469 ### CSR auto-signing with HA and multiple Level Cluster <a id="distributed-monitoring-advanced-hints-csr-autosigning-ha-satellites"></a>
If you are using two masters in a High-Availability setup, it can be necessary
to allow both to sign requested certificates. Ensure to safely sync the following
details between both masters (see the sketch after this list):
2475 * `TicketSalt` constant in `constants.conf`.
* `/var/lib/icinga2/ca` directory.
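A minimal sketch using SCP, assuming SSH access between the two masters:

    [root@icinga2-master1.localdomain /]# scp /etc/icinga2/constants.conf icinga2-master2.localdomain:/etc/icinga2/
    [root@icinga2-master1.localdomain /]# scp -pr /var/lib/icinga2/ca icinga2-master2.localdomain:/var/lib/icinga2/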
2478 This also helps if you are using a [three level cluster](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client)
2479 and your client nodes are not able to reach the CSR auto-signing master node(s).
2480 Make sure that the directory permissions for `/var/lib/icinga2/ca` are secure
2481 (not world readable).
2483 **Do not expose these private keys to anywhere else. This is a matter of security.**
2485 ### Manual Certificate Creation <a id="distributed-monitoring-advanced-hints-certificates-manual"></a>
2487 #### Create CA on the Master <a id="distributed-monitoring-advanced-hints-certificates-manual-ca"></a>
2489 Choose the host which should store the certificate authority (one of the master nodes).
The first step is the creation of the certificate authority (CA) by running the following command on the master node:
2494 [root@icinga2-master1.localdomain /root]# icinga2 pki new-ca
2496 #### Create CSR and Certificate <a id="distributed-monitoring-advanced-hints-certificates-manual-create"></a>
2498 Create a certificate signing request (CSR) for the local instance:
2501 [root@icinga2-master1.localdomain /root]# icinga2 pki new-cert --cn icinga2-master1.localdomain \
2502 --key icinga2-master1.localdomain.key \
2503 --csr icinga2-master1.localdomain.csr
2506 Sign the CSR with the previously created CA:
2509 [root@icinga2-master1.localdomain /root]# icinga2 pki sign-csr --csr icinga2-master1.localdomain.csr --cert icinga2-master1.localdomain
2512 Repeat the steps for all instances in your setup.
> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`. Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.
2519 #### Copy Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-copy"></a>
2521 Copy the host's certificate files and the public CA certificate to `/var/lib/icinga2/certs`:
2524 [root@icinga2-master1.localdomain /root]# mkdir -p /var/lib/icinga2/certs
2525 [root@icinga2-master1.localdomain /root]# cp icinga2-master1.localdomain.{crt,key} /var/lib/icinga2/certs
2526 [root@icinga2-master1.localdomain /root]# cp /var/lib/icinga2/ca/ca.crt /var/lib/icinga2/certs
2529 Ensure that proper permissions are set (replace `icinga` with the Icinga 2 daemon user):
2532 [root@icinga2-master1.localdomain /root]# chown -R icinga:icinga /var/lib/icinga2/certs
2533 [root@icinga2-master1.localdomain /root]# chmod 600 /var/lib/icinga2/certs/*.key
2534 [root@icinga2-master1.localdomain /root]# chmod 644 /var/lib/icinga2/certs/*.crt
The CA public and private key are stored in the `/var/lib/icinga2/ca` directory. Keep this path secure and include it in your backups.
2540 #### Create Multiple Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-multiple"></a>
2542 Use your preferred method to automate the certificate generation process.
2545 [root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do icinga2 pki new-cert --cn $node --csr $node.csr --key $node.key; done
2546 information/base: Writing private key to 'icinga2-master1.localdomain.key'.
2547 information/base: Writing certificate signing request to 'icinga2-master1.localdomain.csr'.
2548 information/base: Writing private key to 'icinga2-master2.localdomain.key'.
2549 information/base: Writing certificate signing request to 'icinga2-master2.localdomain.csr'.
2550 information/base: Writing private key to 'icinga2-satellite1.localdomain.key'.
2551 information/base: Writing certificate signing request to 'icinga2-satellite1.localdomain.csr'.
2553 [root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do sudo icinga2 pki sign-csr --csr $node.csr --cert $node.crt; done
2554 information/pki: Writing certificate to file 'icinga2-master1.localdomain.crt'.
2555 information/pki: Writing certificate to file 'icinga2-master2.localdomain.crt'.
2556 information/pki: Writing certificate to file 'icinga2-satellite1.localdomain.crt'.
2559 Copy and move these certificates to the respective instances e.g. with SSH/SCP.
2561 ## Automation <a id="distributed-monitoring-automation"></a>
2563 These hints should get you started with your own automation tools (Puppet, Ansible, Chef, Salt, etc.)
2564 or custom scripts for automated setup.
2566 These are collected best practices from various community channels.
2568 * [Silent Windows setup](06-distributed-monitoring.md#distributed-monitoring-automation-windows-silent)
2569 * [Node Setup CLI command](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup) with parameters
2571 If you prefer an alternate method, we still recommend leaving all the Icinga 2 features intact (e.g. `icinga2 feature enable api`).
2572 You should also use well known and documented default configuration file locations (e.g. `zones.conf`).
2573 This will tremendously help when someone is trying to help in the [community channels](https://www.icinga.com/community/get-involved/).
2576 ### Silent Windows Setup <a id="distributed-monitoring-automation-windows-silent"></a>
2578 If you want to install the client silently/unattended, use the `/qn` modifier. The
2579 installation should not trigger a restart, but if you want to be completely sure, you can use the `/norestart` modifier.
2581 C:> msiexec /i C:\Icinga2-v2.5.0-x86.msi /qn /norestart
Once the setup is completed you can use the `node setup` CLI command, too.
2585 ### Node Setup using CLI Parameters <a id="distributed-monitoring-automation-cli-node-setup"></a>
2587 Instead of using the `node wizard` CLI command, there is an alternative `node setup`
2588 command available which has some prerequisites.
2590 **Note**: The CLI command can be used on Linux/Unix and Windows operating systems.
2591 The graphical Windows setup wizard actively uses these CLI commands.
2593 #### Node Setup on the Master Node <a id="distributed-monitoring-automation-cli-node-setup-master"></a>
2595 In case you want to setup a master node you must add the `--master` parameter
to the `node setup` CLI command. In addition, the `--cn` parameter can optionally
be passed (defaults to the FQDN).
2599 Parameter | Description
2600 --------------------|--------------------
2601 Common name (CN) | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN. Defaults to the FQDN.
2602 Zone name | **Optional.** Specified with the `--zone` parameter. Defaults to `master`.
2603 Listen on | **Optional.** Specified with the `--listen` parameter. Syntax is `host,port`.
Disable conf.d | **Optional.** Specified with the `--disable-confd` parameter. If provided, this disables the `include_recursive "conf.d"` directive and adds the `api-users.conf` file inclusion to `icinga2.conf`. Available since v2.9. Not set by default for compatibility reasons with Puppet, Ansible, Chef, etc.
Example:

    [root@icinga2-master1.localdomain /]# icinga2 node setup --master
2612 In case you want to bind the `ApiListener` object to a specific
2613 host/port you can specify it like this:
2615 --listen 192.68.56.101,5665
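Combined into a single call, this could look like the following (the IP address matches the examples used throughout this chapter):

    [root@icinga2-master1.localdomain /]# icinga2 node setup --master --listen 192.168.56.101,5665
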
In case you don't need anything in `conf.d`, use the following command line:

    [root@icinga2-master1.localdomain /]# icinga2 node setup --master --disable-confd

#### Node Setup with Satellites/Clients <a id="distributed-monitoring-automation-cli-node-setup-satellite-client"></a>

> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`. Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths) for more details.

Make sure that the `/var/lib/icinga2/certs` directory exists and is owned by the `icinga`
user (or the user Icinga 2 is running as).

    [root@icinga2-client1.localdomain /]# mkdir -p /var/lib/icinga2/certs
    [root@icinga2-client1.localdomain /]# chown -R icinga:icinga /var/lib/icinga2/certs

First you'll need to generate a new local self-signed certificate.
Pass the following details to the `pki new-cert` CLI command:

Parameter                | Description
-------------------------|--------------------
Common name (CN)         | **Required.** By convention this should be the host's FQDN.
Client certificate files | **Required.** These generated files will be put into the specified location (`--key` and `--cert`). By convention this should use `/var/lib/icinga2/certs` as the directory.

Example:

    [root@icinga2-client1.localdomain /]# icinga2 pki new-cert --cn icinga2-client1.localdomain \
    --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
    --cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt

Request the master certificate from the master host (`icinga2-master1.localdomain`)
and store it as `trusted-parent.crt`. Review it and continue.

Pass the following details to the `pki save-cert` CLI command:

Parameter                  | Description
---------------------------|--------------------
Client certificate files   | **Required.** Pass the previously generated files using the `--key` and `--cert` parameters.
Trusted parent certificate | **Required.** Store the parent's certificate file. Manually verify that you're trusting it.
Parent host                | **Required.** FQDN or IP address of the parent host.

Example:

    [root@icinga2-client1.localdomain /]# icinga2 pki save-cert --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
    --cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt \
    --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
    --host icinga2-master1.localdomain

Continue with the additional node setup step. Specify a local endpoint and zone name (`icinga2-client1.localdomain`)
and set the master host (`icinga2-master1.localdomain`) as the parent zone configuration. Specify the path to
the previously stored trusted parent certificate.

Pass the following details to the `node setup` CLI command:

Parameter                  | Description
---------------------------|--------------------
Common name (CN)           | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN.
Request ticket             | **Required.** Add the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing).
Trusted parent certificate | **Required.** Add the previously fetched trusted parent certificate (this step means that you've verified its origin).
Parent host                | **Optional.** FQDN or IP address of the parent host. This is where the command connects for CSR signing. If not specified, you need to manually copy the parent's public CA certificate file into `/var/lib/icinga2/certs/ca.crt` in order to start Icinga 2.
Parent endpoint            | **Required.** Specify the parent's endpoint name.
Client zone name           | **Required.** Specify the client's zone name.
Parent zone name           | **Optional.** Specify the parent's zone name.
Accept config              | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)).
Accept commands            | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
Global zones               | **Optional.** Allows you to specify additional global zones in addition to `global-templates` and `director-global`.
Disable conf.d             | **Optional.** Specified with the `--disable-confd` parameter. If provided, this disables the `include_recursive "conf.d"` directive in `icinga2.conf`. Available since v2.9+. Not set by default for compatibility reasons with Puppet, Ansible, Chef, etc.

> **Note**
>
> The `--master_host` parameter is deprecated and will be removed in 2.10.0. Please use `--parent_host` instead.

Example for Icinga 2 v2.9:

    [root@icinga2-client1.localdomain /]# icinga2 node setup --ticket ead2d570e18c78abf285d6b85524970a0f69c22d \
    --cn icinga2-client1.localdomain \
    --endpoint icinga2-master1.localdomain \
    --zone icinga2-client1.localdomain \
    --parent_zone master \
    --parent_host icinga2-master1.localdomain \
    --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
    --accept-commands --accept-config \
    --disable-confd

In case the client should connect to the master node, you'll
need to modify the `--endpoint` parameter using the format `cn,host,port`:

    --endpoint icinga2-master1.localdomain,192.168.56.101,5665

Specify the parent zone using the `--parent_zone` parameter. This is useful
if the client connects to a satellite, not the master instance.

    --parent_zone satellite

In case the client should know the additional global zone `linux-templates`, you'll
need to set the `--global_zones` parameter.

    --global_zones linux-templates

The `--parent_host` parameter is optional since v2.9 and allows you to perform a connection-less setup.
You cannot restart Icinga 2 yet: the CLI command asked you to manually copy the parent's public CA
certificate file into `/var/lib/icinga2/certs/ca.crt`. Once Icinga 2 is started, it sends
a ticket signing request to the parent node. If you have provided a ticket, the master node
signs the request and sends it back to the client which performs a certificate update in-memory.

In case you did not provide a ticket, you need to manually sign the CSR on the master node
which holds the CA's key pair.

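The pending request can then be listed and signed on the CA master with the `ca` CLI commands, for example (the fingerprint is a placeholder taken from the `ca list` output):

    [root@icinga2-master1.localdomain /]# icinga2 ca list
    [root@icinga2-master1.localdomain /]# icinga2 ca sign <fingerprint>
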
**You can find additional best practices below.**

Add an additional global zone. Please note the `>>` append mode.

    [root@icinga2-client1.localdomain /]# cat <<EOF >>/etc/icinga2/zones.conf
    object Zone "global-templates" {
      global = true
    }
    EOF

Note: Packages >= 2.8 provide this configuration by default.

If this client node is configured as [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
you can safely disable the `checker` feature. The `node setup` CLI command already disabled the `notification` feature.

    [root@icinga2-client1.localdomain /]# icinga2 feature disable checker

Disable "conf.d" inclusion if this is a [top down](06-distributed-monitoring.md#distributed-monitoring-top-down)
configured client.

    [root@icinga2-client1.localdomain /]# sed -i 's/include_recursive "conf.d"/\/\/include_recursive "conf.d"/g' /etc/icinga2/icinga2.conf

**Optional**: Add an ApiUser object configuration for remote troubleshooting.

    [root@icinga2-client1.localdomain /]# cat <<EOF >/etc/icinga2/conf.d/api-users.conf
    object ApiUser "root" {
      password = "clientsupersecretpassword"
      permissions = [ "*" ]
    }
    EOF

In case you've previously disabled the "conf.d" directory, only
add the file `conf.d/api-users.conf`:

    [root@icinga2-client1.localdomain /]# echo 'include "conf.d/api-users.conf"' >> /etc/icinga2/icinga2.conf

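Before restarting, it is worth validating the configuration; `icinga2 daemon -C` performs a configuration check without starting the daemon:

    [root@icinga2-client1.localdomain /]# icinga2 daemon -C
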
Finally restart Icinga 2.

    [root@icinga2-client1.localdomain /]# systemctl restart icinga2

Your automation tool must then configure the master node in the meantime.
Add the global zone `global-templates` in case it did not exist.

    # cat <<EOF >>/etc/icinga2/zones.conf
    object Endpoint "icinga2-client1.localdomain" {
      //client connects itself
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]
      parent = "master"
    }

    object Zone "global-templates" {
      global = true
    }
    EOF

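Afterwards, validate the configuration and restart the master as well so that the new endpoint and zones become active, mirroring the client steps above:

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2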