1 # Distributed Monitoring with Master, Satellites, and Clients <a id="distributed-monitoring"></a>
3 This chapter will guide you through the setup of a distributed monitoring
4 environment, including high-availability clustering and setup details
5 for the Icinga 2 client.
7 ## Roles: Master, Satellites, and Clients <a id="distributed-monitoring-roles"></a>
9 Icinga 2 nodes can be given names for easier understanding:
11 * A `master` node which is on top of the hierarchy.
12 * A `satellite` node which is a child of a `satellite` or `master` node.
13 * A `client` node which works as an `agent` connected to `master` and/or `satellite` nodes.
15 ![Icinga 2 Distributed Roles](images/distributed-monitoring/icinga2_distributed_roles.png)
Rephrasing this picture in more detail:
19 * A `master` node has no parent node.
* A `master` node is where you usually install Icinga Web 2.
21 * A `master` node can combine executed checks from child nodes into backends and notifications.
22 * A `satellite` node has a parent and a child node.
23 * A `satellite` node may execute checks on its own or delegate check execution to child nodes.
24 * A `satellite` node can receive configuration for hosts/services, etc. from the parent node.
25 * A `satellite` node continues to run even if the master node is temporarily unavailable.
26 * A `client` node only has a parent node.
27 * A `client` node will either run its own configured checks or receive command execution events from the parent node.
29 The following sections will refer to these roles and explain the
30 differences and the possibilities this kind of setup offers.
**Tip**: If you just want to install a single master node that monitors several hosts
(i.e. Icinga 2 clients), continue reading -- we'll start with a single master setup below.
35 In case you are planning a huge cluster setup with multiple levels and
36 lots of clients, read on -- we'll deal with these cases later on.
38 The installation on each system is the same: You need to install the
39 [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and the required [plugins](02-getting-started.md#setting-up-check-plugins).
41 The required configuration steps are mostly happening
42 on the command line. You can also [automate the setup](06-distributed-monitoring.md#distributed-monitoring-automation).
The first thing you need to learn about a distributed setup is the hierarchy of its components.
46 ## Zones <a id="distributed-monitoring-zones"></a>
48 The Icinga 2 hierarchy consists of so-called [zone](09-object-types.md#objecttype-zone) objects.
49 Zones depend on a parent-child relationship in order to trust each other.
51 ![Icinga 2 Distributed Zones](images/distributed-monitoring/icinga2_distributed_zones.png)
53 Have a look at this example for the `satellite` zones which have the `master` zone as a parent zone:
56 object Zone "master" {
60 object Zone "satellite region 1" {
65 object Zone "satellite region 2" {
71 There are certain limitations for child zones, e.g. their members are not allowed
72 to send configuration commands to the parent zone members. Vice versa, the
trust hierarchy allows, for example, the `master` zone to send
configuration files to the `satellite` zone. Read more about this
75 in the [security section](06-distributed-monitoring.md#distributed-monitoring-security).
77 `client` nodes also have their own unique zone. By convention you
78 can use the FQDN for the zone name.
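
As a minimal sketch (the attributes are explained in the following sections), such a client zone could look like this:

```
object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ] // the client's endpoint, named after its FQDN
  parent = "master"
}
```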
80 ## Endpoints <a id="distributed-monitoring-endpoints"></a>
82 Nodes which are a member of a zone are so-called [Endpoint](09-object-types.md#objecttype-endpoint) objects.
84 ![Icinga 2 Distributed Endpoints](images/distributed-monitoring/icinga2_distributed_endpoints.png)
86 Here is an example configuration for two endpoints in different zones:
89 object Endpoint "icinga2-master1.localdomain" {
90 host = "192.168.56.101"
93 object Endpoint "icinga2-satellite1.localdomain" {
94 host = "192.168.56.105"
97 object Zone "master" {
98 endpoints = [ "icinga2-master1.localdomain" ]
101 object Zone "satellite" {
102 endpoints = [ "icinga2-satellite1.localdomain" ]
All endpoints in the same zone work as a high-availability setup. For
example, if you have two nodes in the `master` zone, they will load-balance the check execution.
110 Endpoint objects are important for specifying the connection
111 information, e.g. if the master should actively try to connect to a client.
113 The zone membership is defined inside the `Zone` object definition using
114 the `endpoints` attribute with an array of `Endpoint` names.
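
For example, a high-availability `master` zone with two members could be sketched like this (`icinga2-master2.localdomain` is an illustrative second node):

```
object Zone "master" {
  // both endpoints load-balance check execution within this zone
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}
```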
> **Note**
>
> There is a known [problem](https://github.com/Icinga/icinga2/issues/3533)
119 > with >2 endpoints in a zone and a message routing loop.
120 > The config validation will log a warning to let you know about this too.
122 If you want to check the availability (e.g. ping checks) of the node
123 you still need a [Host](09-object-types.md#objecttype-host) object.
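
A minimal sketch for such a Host object, re-using the endpoint's name and address from the example above:

```
object Host "icinga2-satellite1.localdomain" {
  check_command = "hostalive" // ping-based availability check
  address = "192.168.56.105"
}
```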
125 ## ApiListener <a id="distributed-monitoring-apilistener"></a>
127 In case you are using the CLI commands later, you don't have to write
128 this configuration from scratch in a text editor.
129 The [ApiListener](09-object-types.md#objecttype-apilistener) object is
130 used to load the TLS certificates and specify restrictions, e.g.
131 for accepting configuration commands.
133 It is also used for the [Icinga 2 REST API](12-icinga2-api.md#icinga2-api) which shares
134 the same host and port with the Icinga 2 Cluster protocol.
136 The object configuration is stored in the `/etc/icinga2/features-enabled/api.conf`
137 file. Depending on the configuration mode the attributes `accept_commands`
138 and `accept_config` can be configured here.
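
As a sketch, the relevant parts of `api.conf` on a node which accepts both synced configuration and remote commands (both attributes default to `false`) might look like this:

```
object ApiListener "api" {
  accept_config = true   // allow config sync from the parent zone
  accept_commands = true // allow command endpoint execution
}
```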
140 In order to use the `api` feature you need to enable it and restart Icinga 2.
```
icinga2 feature enable api
```
146 ## Conventions <a id="distributed-monitoring-conventions"></a>
148 By convention all nodes should be configured using their FQDN.
150 Furthermore, you must ensure that the following names
151 are exactly the same in all configuration files:
153 * Host certificate common name (CN).
154 * Endpoint configuration object for the host.
155 * NodeName constant for the local host.
157 Setting this up on the command line will help you to minimize the effort.
158 Just keep in mind that you need to use the FQDN for endpoints and for
159 common names when asked.
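
For example, assuming the FQDN `icinga2-client1.localdomain`, the three names line up like this (a sketch; the certificate CN is set during the setup wizard run):

```
/* constants.conf */
const NodeName = "icinga2-client1.localdomain"

/* zones.conf -- the endpoint name matches NodeName and the certificate CN */
object Endpoint "icinga2-client1.localdomain" {
}
```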
161 ## Security <a id="distributed-monitoring-security"></a>
While there are certain mechanisms to ensure a secure communication between all
nodes (firewalls, policies, software hardening, etc.), Icinga 2 also provides
additional security:
167 * SSL certificates are mandatory for communication between nodes. The CLI commands
168 help you create those certificates.
169 * Child zones only receive updates (check results, commands, etc.) for their configured objects.
170 * Child zones are not allowed to push configuration updates to parent zones.
171 * Zones cannot interfere with other zones and influence each other. Each checkable host or service object is assigned to **one zone** only.
172 * All nodes in a zone trust each other.
* [Config sync](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync) and [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) are disabled by default.
175 The underlying protocol uses JSON-RPC event notifications exchanged by nodes.
176 The connection is secured by TLS. The message protocol uses an internal API,
177 and as such message types and names may change internally and are not documented.
Zones build the trust relationship in a distributed environment. If you do not specify
a zone for a client and its parent zone, the parent zone's members (e.g. the master instance)
won't trust the client.
183 Building this trust is key in your distributed environment. That way the parent node
184 knows that it is able to send messages to the child zone, e.g. configuration objects,
185 configuration in global zones, commands to be executed in this zone/for this endpoint.
186 It also receives check results from the child zone for checkable objects (host/service).
Vice versa, the client trusts the master and accepts configuration and commands if enabled
in the api feature. If the client were to send configuration to the parent zone, the parent nodes
would deny it. The parent zone is the configuration entity, and does not trust clients in this matter.
A client could attempt to modify a different client, for example, or inject a check command
with malicious code.
194 While it may sound complicated for client setups, it removes the problem with different roles
195 and configurations for a master and a client. Both of them work the same way, are configured
196 in the same way (Zone, Endpoint, ApiListener), and you can troubleshoot and debug them in just one go.
198 ## Versions and Upgrade <a id="distributed-monitoring-versions-upgrade"></a>
It is generally advised to use the newest releases with the same version on all instances.
201 Prior to upgrading, make sure to plan a maintenance window.
203 The Icinga project aims to allow the following compatibility:
```
master (2.11) >= satellite (2.10) >= clients (2.9)
```
209 Older client versions may work, but there's no guarantee. Always keep in mind that
210 older versions are out of support and can contain bugs.
When upgrading, ensure that the master is upgraded first, then the
involved satellites, and finally the Icinga 2 clients. If you are on v2.10
214 currently, first upgrade the master instance(s) to 2.11, and then proceed
215 with the satellites. Things are getting easier with any sort of automation
216 tool (Puppet, Ansible, etc.).
Releases and new features may require you to upgrade master/satellite instances at once;
this is highlighted in the [upgrading docs](16-upgrading-icinga-2.md#upgrading-icinga-2) if needed.
220 One example is the CA Proxy and on-demand signing feature
221 available since v2.8 where all involved instances need this version
222 to function properly.
224 ## Master Setup <a id="distributed-monitoring-setup-master"></a>
226 This section explains how to install a central single master node using
227 the `node wizard` command. If you prefer to do an automated installation, please
228 refer to the [automated setup](06-distributed-monitoring.md#distributed-monitoring-automation) section.
Install the [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and setup
the required [plugins](02-getting-started.md#setting-up-check-plugins) if you haven't done
so already.
234 **Note**: Windows is not supported for a master node setup.
The next step is to run the `node wizard` CLI command. Prior to that,
make sure to collect the required information:
239 Parameter | Description
240 --------------------|--------------------
241 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
242 Master zone name | **Optional.** Allows to specify the master zone name. Defaults to `master`.
243 Global zones | **Optional.** Allows to specify more global zones in addition to `global-templates` and `director-global`. Defaults to `n`.
244 API bind host | **Optional.** Allows to specify the address the ApiListener is bound to. For advanced usage only.
245 API bind port | **Optional.** Allows to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
246 Disable conf.d | **Optional.** Allows to disable the `include_recursive "conf.d"` directive except for the `api-users.conf` file in the `icinga2.conf` file. Defaults to `y`. Configuration on the master is discussed below.
248 The setup wizard will ensure that the following steps are taken:
250 * Enable the `api` feature.
251 * Generate a new certificate authority (CA) in `/var/lib/icinga2/ca` if it doesn't exist.
252 * Create a certificate for this node signed by the CA key.
253 * Update the [zones.conf](04-configuring-icinga-2.md#zones-conf) file with the new zone hierarchy.
254 * Update the [ApiListener](06-distributed-monitoring.md#distributed-monitoring-apilistener) and [constants](04-configuring-icinga-2.md#constants-conf) configuration.
255 * Update the [icinga2.conf](04-configuring-icinga-2.md#icinga2-conf) to disable the `conf.d` inclusion, and add the `api-users.conf` file inclusion.
257 Here is an example of a master setup for the `icinga2-master1.localdomain` node on CentOS 7:
```
[root@icinga2-master1.localdomain /]# icinga2 node wizard

Welcome to the Icinga 2 Setup Wizard!

We will guide you through all required configuration details.

Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]: n

Starting the Master setup routine...

Please specify the common name (CN) [icinga2-master1.localdomain]: icinga2-master1.localdomain
Reconfiguring Icinga...
Checking for existing certificates for common name 'icinga2-master1.localdomain'...
Certificates not yet generated. Running 'api setup' now.
Generating master configuration for Icinga 2.
Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.

Master zone name [master]:

Default global zones: global-templates director-global
Do you want to specify additional global zones? [y/N]: N

Please specify the API bind host/port (optional):

Do you want to disable the inclusion of the conf.d directory [Y/n]:
Disabling the inclusion of the conf.d directory...
Checking if the api-users.conf file exists...

Now restart your Icinga 2 daemon to finish the installation!
```
295 You can verify that the CA public and private keys are stored in the `/var/lib/icinga2/ca` directory.
296 Keep this path secure and include it in your [backups](02-getting-started.md#install-backup).
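
A simple way to do that is a tarball of the CA directory, for example (the destination path is up to you):

```
[root@icinga2-master1.localdomain /]# tar -czf /root/icinga2-ca-backup.tar.gz /var/lib/icinga2/ca
```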
In case you lose the CA private key you have to generate a new CA for signing new client
certificate requests. You then have to also re-create new signed certificates for all
participating nodes.
302 Once the master setup is complete, you can also use this node as primary [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing)
master. The following section will explain how to use the CLI commands to fetch
signed certificates from this master node.
306 ## Signing Certificates on the Master <a id="distributed-monitoring-setup-sign-certificates-master"></a>
308 All certificates must be signed by the same certificate authority (CA). This ensures
309 that all nodes trust each other in a distributed monitoring environment.
311 This CA is generated during the [master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master)
312 and should be the same on all master instances.
314 You can avoid signing and deploying certificates [manually](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-certificates-manual)
315 by using built-in methods for auto-signing certificate signing requests (CSR):
317 * [CSR Auto-Signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) which uses a client ticket generated on the master as trust identifier.
318 * [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) which allows to sign pending certificate requests on the master.
320 Both methods are described in detail below.
> **Note**
>
> [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) is available in Icinga 2 v2.8+.
326 ### CSR Auto-Signing <a id="distributed-monitoring-setup-csr-auto-signing"></a>
328 A client which sends a certificate signing request (CSR) must authenticate itself
329 in a trusted way. The master generates a client ticket which is included in this request.
330 That way the master can verify that the request matches the previously trusted ticket
331 and sign the request.
> **Note**
>
> Icinga 2 v2.8 added the possibility to forward signing requests on a satellite
> to the master node. This is called `CA Proxy` in blog posts and design drafts.
> This functionality helps with the setup of [three level clusters](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client)
> and more.
Advantages:

* Nodes can be installed by different users who have received the client ticket.
343 * No manual interaction necessary on the master node.
344 * Automation tools like Puppet, Ansible, etc. can retrieve the pre-generated ticket in their client catalog
345 and run the node setup directly.
Disadvantages:

* Tickets need to be generated on the master and copied to client setup wizards.
350 * No central signing management.
353 Setup wizards for satellite/client nodes will ask you for this specific client ticket.
355 There are two possible ways to retrieve the ticket:
357 * [CLI command](11-cli-commands.md#cli-command-pki) executed on the master node.
358 * [REST API](12-icinga2-api.md#icinga2-api) request against the master node.
360 Required information:
362 Parameter | Description
363 --------------------|--------------------
364 Common name (CN) | **Required.** The common name for the satellite/client. By convention this should be the FQDN.
366 The following example shows how to generate a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:
```
[root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
4f75d2ecd253575fe9180938ebff7cbca262f96e
```
372 Querying the [Icinga 2 API](12-icinga2-api.md#icinga2-api) on the master requires an [ApiUser](12-icinga2-api.md#icinga2-api-authentication)
373 object with at least the `actions/generate-ticket` permission.
```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/conf.d/api-users.conf

object ApiUser "client-pki-ticket" {
  password = "bea11beb7b810ea9ce6ea" //change this
  permissions = [ "actions/generate-ticket" ]
}

[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```
385 Retrieve the ticket on the master node `icinga2-master1.localdomain` with `curl`, for example:
```
[root@icinga2-master1.localdomain /]# curl -k -s -u client-pki-ticket:bea11beb7b810ea9ce6ea -H 'Accept: application/json' \
-X POST 'https://localhost:5665/v1/actions/generate-ticket' -d '{ "cn": "icinga2-client1.localdomain" }'
```
391 Store that ticket number for the satellite/client setup below.
> **Note**
>
> Never expose the ticket salt and/or ApiUser credentials to your client nodes.
396 > Example: Retrieve the ticket on the Puppet master node and send the compiled catalog
397 > to the authorized Puppet agent node which will invoke the
398 > [automated setup steps](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup).
400 ### On-Demand CSR Signing <a id="distributed-monitoring-setup-on-demand-csr-signing"></a>
The client sends a certificate signing request to the specified parent node without any
403 ticket. The admin on the master is responsible for reviewing and signing the requests
404 with the private CA key.
This could either be the master directly, or a satellite which forwards the request
407 to the signing master.
Advantages:

* Central certificate request signing management.
412 * No pre-generated ticket is required for client setups.
Disadvantages:

* Asynchronous step for automated deployments.
417 * Needs client verification on the master.
420 You can list pending certificate signing requests with the `ca list` CLI command.
```
[root@icinga2-master1.localdomain /]# icinga2 ca list
Fingerprint                                                      | Timestamp           | Signed | Subject
-----------------------------------------------------------------|---------------------|--------|--------
71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850 | 2017/09/06 17:20:02 |        | CN = icinga2-client2.localdomain
```
429 In order to show all requests, use the `--all` parameter.
```
[root@icinga2-master1.localdomain /]# icinga2 ca list --all
Fingerprint                                                      | Timestamp           | Signed | Subject
-----------------------------------------------------------------|---------------------|--------|--------
403da5b228df384f07f980f45ba50202529cded7c8182abf96740660caa09727 | 2017/09/06 17:02:40 | *      | CN = icinga2-client1.localdomain
71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850 | 2017/09/06 17:20:02 |        | CN = icinga2-client2.localdomain
```
439 **Tip**: Add `--json` to the CLI command to retrieve the details in JSON format.
441 If you want to sign a specific request, you need to use the `ca sign` CLI command
442 and pass its fingerprint as argument.
```
[root@icinga2-master1.localdomain /]# icinga2 ca sign 71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850
information/cli: Signed certificate for 'CN = icinga2-client2.localdomain'.
```
> **Note**
>
> `ca list` cannot be used as a historical inventory. Certificate
> signing requests older than 1 week are automatically deleted.
454 ## Client/Satellite Setup <a id="distributed-monitoring-setup-satellite-client"></a>
456 This section describes the setup of a satellite and/or client connected to an
457 existing master node setup. If you haven't done so already, please [run the master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master).
459 Icinga 2 on the master node must be running and accepting connections on port `5665`.
462 ### Client/Satellite Setup on Linux <a id="distributed-monitoring-setup-client-linux"></a>
464 Please ensure that you've run all the steps mentioned in the [client/satellite section](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
Install the [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and setup
the required [plugins](02-getting-started.md#setting-up-check-plugins) if you haven't done
so already.
470 The next step is to run the `node wizard` CLI command.
472 In this example we're generating a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:
```
[root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
4f75d2ecd253575fe9180938ebff7cbca262f96e
```
479 Note: You don't need this step if you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
481 Start the wizard on the client `icinga2-client1.localdomain`:
```
[root@icinga2-client1.localdomain /]# icinga2 node wizard

Welcome to the Icinga 2 Setup Wizard!

We will guide you through all required configuration details.
```
Press `Enter` or type `y` to start a satellite or client setup.
```
Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]:
```
Press `Enter` to use the proposed name in brackets, or enter a specific common name (CN). By convention
this should be the FQDN.
```
Starting the Client/Satellite setup routine...

Please specify the common name (CN) [icinga2-client1.localdomain]: icinga2-client1.localdomain
```
506 Specify the direct parent for this node. This could be your primary master `icinga2-master1.localdomain`
507 or a satellite node in a multi level cluster scenario.
```
Please specify the parent endpoint(s) (master or satellite) where this node should connect to:
Master/Satellite Common Name (CN from your master/satellite node): icinga2-master1.localdomain
```
514 Press `Enter` or choose `y` to establish a connection to the parent node.
```
Do you want to establish a connection to the parent node from this node? [Y/n]:
```
> **Note**
>
> If this node cannot connect to the parent node, choose `n`. The setup
> wizard will provide instructions for this scenario -- signing questions are disabled then.
525 Add the connection details for `icinga2-master1.localdomain`.
```
Please specify the master/satellite connection information:
Master/Satellite endpoint host (IP address or FQDN): 192.168.56.101
Master/Satellite endpoint port [5665]: 5665
```
533 You can add more parent nodes if necessary. Press `Enter` or choose `n`
534 if you don't want to add any. This comes in handy if you have more than one
535 parent node, e.g. two masters or two satellites.
```
Add more master/satellite endpoints? [y/N]:
```
541 Verify the parent node's certificate:
```
Parent certificate information:

 Subject:     CN = icinga2-master1.localdomain
 Issuer:      CN = Icinga CA
 Valid From:  Sep  7 13:41:24 2017 GMT
 Valid Until: Sep  3 13:41:24 2032 GMT
 Fingerprint: AC 99 8B 2B 3D B0 01 00 E5 21 FA 05 2E EC D5 A9 EF 9E AA E3

Is this information correct? [y/N]: y
```
The setup wizard fetches the parent node's certificate and asks
you to verify this information. This is to prevent MITM attacks or
557 any kind of untrusted parent relationship.
Note: The certificate is not fetched if you have chosen not to connect
to the parent node.
```
Please specify the request ticket generated on your Icinga 2 master (optional).
(Hint: # icinga2 pki ticket --cn 'icinga2-client1.localdomain'):
4f75d2ecd253575fe9180938ebff7cbca262f96e
```
570 In case you've chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
571 you can leave the ticket question blank.
573 Instead, Icinga 2 tells you to approve the request later on the master node.
```
No ticket was specified. Please approve the certificate signing request manually
on the master (see 'icinga2 ca list' and 'icinga2 ca sign --help' for details).
```
580 You can optionally specify a different bind host and/or port.
```
Please specify the API bind host/port (optional):
```
588 The next step asks you to accept configuration (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync))
589 and commands (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
```
Accept config from parent node? [y/N]: y
Accept commands from parent node? [y/N]: y
```
596 Next you can optionally specify the local and parent zone names. This will be reflected
597 in the generated zone configuration file.
599 Set the local zone name to something else, if you are installing a satellite or secondary master instance.
```
Local zone name [icinga2-client1.localdomain]:
```
605 Set the parent zone name to something else than `master` if this client connects to a satellite instance instead of the master.
```
Parent zone name [master]:
```
You can add more global zones in addition to `global-templates` and `director-global` if necessary.
Press `Enter` or choose `n` if you don't want to add any additional ones.
```
Reconfiguring Icinga...

Default global zones: global-templates director-global
Do you want to specify additional global zones? [y/N]: N
```
Last but not least the wizard asks you whether you want to disable the inclusion of the local configuration
directory `conf.d`. This defaults to disabled, as clients are either checked via command endpoint or
receive their configuration synced from the parent zone.
```
Do you want to disable the inclusion of the conf.d directory [Y/n]: Y
Disabling the inclusion of the conf.d directory...
```
631 The wizard proceeds and you are good to go.
```
Now restart your Icinga 2 daemon to finish the installation!
```
> **Note**
>
> If you have chosen not to connect to the parent node, you cannot start
> Icinga 2 yet. The wizard asked you to manually copy the master's public
> CA certificate file into `/var/lib/icinga2/certs/ca.crt`.
>
> You need to manually sign the CSR on the master node.
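
One way to copy the CA certificate, assuming SSH access from the client to the master, is `scp`:

```
[root@icinga2-client1.localdomain /]# scp root@icinga2-master1.localdomain:/var/lib/icinga2/certs/ca.crt /var/lib/icinga2/certs/ca.crt
```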
647 Restart Icinga 2 as requested.
```
[root@icinga2-client1.localdomain /]# systemctl restart icinga2
```
653 Here is an overview of all parameters in detail:
655 Parameter | Description
656 --------------------|--------------------
657 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
658 Master common name | **Required.** Use the common name you've specified for your master node before.
659 Establish connection to the parent node | **Optional.** Whether the node should attempt to connect to the parent node or not. Defaults to `y`.
Master/Satellite endpoint host | **Required if the client needs to connect to the master/satellite.** The parent endpoint's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
Master/Satellite endpoint port | **Optional if the client needs to connect to the master/satellite.** The parent endpoint's listening port. This information is included in the `Endpoint` object configuration.
662 Add more master/satellite endpoints | **Optional.** If you have multiple master/satellite nodes configured, add them here.
663 Parent Certificate information | **Required.** Verify that the connecting host really is the requested master node.
664 Request ticket | **Optional.** Add the [ticket](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) generated on the master.
665 API bind host | **Optional.** Allows to specify the address the ApiListener is bound to. For advanced usage only.
666 API bind port | **Optional.** Allows to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
667 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
668 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
669 Local zone name | **Optional.** Allows to specify the name for the local zone. This comes in handy when this instance is a satellite, not a client. Defaults to the FQDN.
670 Parent zone name | **Optional.** Allows to specify the name for the parent zone. This is important if the client has a satellite instance as parent, not the master. Defaults to `master`.
671 Global zones | **Optional.** Allows to specify more global zones in addition to `global-templates` and `director-global`. Defaults to `n`.
672 Disable conf.d | **Optional.** Allows to disable the inclusion of the `conf.d` directory which holds local example configuration. Clients should retrieve their configuration from the parent node, or act as command endpoint execution bridge. Defaults to `y`.
674 The setup wizard will ensure that the following steps are taken:
676 * Enable the `api` feature.
677 * Create a certificate signing request (CSR) for the local node.
* Request a signed certificate (optionally with the provided ticket number) on the master node.
679 * Allow to verify the parent node's certificate.
680 * Store the signed client certificate and ca.crt in `/var/lib/icinga2/certs`.
681 * Update the `zones.conf` file with the new zone hierarchy.
682 * Update `/etc/icinga2/features-enabled/api.conf` (`accept_config`, `accept_commands`) and `constants.conf`.
683 * Update `/etc/icinga2/icinga2.conf` and comment out `include_recursive "conf.d"`.
685 You can verify that the certificate files are stored in the `/var/lib/icinga2/certs` directory.
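
The directory typically contains the CA certificate plus the node's certificate and private key, named after the common name, e.g.:

```
[root@icinga2-client1.localdomain /]# ls /var/lib/icinga2/certs
ca.crt  icinga2-client1.localdomain.crt  icinga2-client1.localdomain.key
```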
> **Note**
>
> If the client is not directly connected to the certificate signing master,
> signing requests and responses might need some minutes to fully update the client certificates.
>
> If you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
> certificates need to be signed on the master first. Ticket-less setups require at least Icinga 2 v2.8+ on all involved instances.
695 Now that you've successfully installed a Linux/Unix satellite/client instance, please proceed to
696 the [configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
700 ### Client Setup on Windows <a id="distributed-monitoring-setup-client-windows"></a>
702 Download the MSI-Installer package from [https://packages.icinga.com/windows/](https://packages.icinga.com/windows/).
Requirements:

* Windows Vista/Server 2008 or higher
* Versions older than Windows 10/Server 2016 require the [Universal C Runtime for Windows](https://support.microsoft.com/en-us/help/2999226/update-for-universal-c-runtime-in-windows)
* [Microsoft .NET Framework 4.6](https://www.microsoft.com/en-US/download/details.aspx?id=53344) or higher for the setup wizard
710 The installer package includes the [NSClient++](https://www.nsclient.org/) package
711 so that Icinga 2 can use its built-in plugins. You can find more details in
712 [this chapter](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
713 The Windows package also installs native [monitoring plugin binaries](06-distributed-monitoring.md#distributed-monitoring-windows-plugins)
714 to get you started more easily.
> **Note**
>
> Please note that Icinga 2 was designed to run as light-weight client on Windows.
> There is no support for satellite instances.
721 #### Windows Client Setup Start <a id="distributed-monitoring-setup-client-windows-start"></a>
723 Run the MSI-Installer package and follow the instructions shown in the screenshots.
725 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_01.png)
726 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_02.png)
727 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_03.png)
728 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_04.png)
729 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_05.png)
731 The graphical installer offers to run the Icinga 2 setup wizard after the installation. Select
732 the check box to proceed.
> **Note**
>
> You can also run the Icinga 2 setup wizard from the Start menu later.
738 On a fresh installation the setup wizard guides you through the initial configuration.
It also provides a mechanism to send a certificate request to the [CSR signing master](06-distributed-monitoring.md#distributed-monitoring-setup-sign-certificates-master).
741 The following configuration details are required:
743 Parameter | Description
744 --------------------|--------------------
745 Instance name | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
746 Setup ticket | **Optional.** Paste the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing). If left blank, the certificate request must be [signed on the master node](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
748 Fill in the required information and click `Add` to add a new master connection.
750 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_01.png)
752 Add the following details:
754 Parameter | Description
755 -------------------------------|-------------------------------
Instance name | **Required.** The name of the master/satellite endpoint of which this client is a direct child.
757 Master/Satellite endpoint host | **Required.** The master or satellite's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
758 Master/Satellite endpoint port | **Optional.** The master or satellite's listening port. This information is included in the `Endpoint` object configuration.
760 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02.png)
762 When needed you can add an additional global zone (the zones `global-templates` and `director-global` are added by default):
764 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02_global_zone.png)
766 Optionally enable the following settings:
768 Parameter | Description
769 ----------------------------------|----------------------------------
770 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
771 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
772 Run Icinga 2 service as this user | **Optional.** Specify a different Windows user. This defaults to `NT AUTHORITY\Network Service` and is required for more privileged service checks.
773 Install NSClient++ | **Optional.** The Windows installer bundles the NSClient++ installer for additional [plugin checks](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
774 Disable conf.d | **Optional.** Allows to disable the `include_recursive "conf.d"` directive except for the `api-users.conf` file in the `icinga2.conf` file. Defaults to `true`.
776 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_03.png)
Verify the certificate from the master/satellite instance to which this node should connect.
780 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_04.png)
783 #### Bundled NSClient++ Setup <a id="distributed-monitoring-setup-client-windows-nsclient"></a>
If you have chosen to install/update the NSClient++ package, the Icinga 2 setup wizard asks
you to do so.
788 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_01.png)
790 Choose the `Generic` setup.
792 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_02.png)
794 Choose the `Custom` setup type.
796 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_03.png)
798 NSClient++ does not install a sample configuration by default. Change this as shown in the screenshot.
800 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_04.png)
Generate a secure password and enable the web server module. **Note**: The webserver module is
available starting with NSClient++ 0.5.0, which is bundled with Icinga 2 v2.6+.
805 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_05.png)
807 Finish the installation.
809 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_06.png)
Open a web browser and navigate to `https://localhost:8443`. Enter the password you've configured
during the setup. In case you lost it, look into the `C:\Program Files\NSClient++\nsclient.ini`
configuration file.
817 The NSClient++ REST API can be used to query metrics. [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
818 uses this transport method.
821 #### Finish Windows Client Setup <a id="distributed-monitoring-setup-client-windows-finish"></a>
823 Finish the Windows setup wizard.
825 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_with_ticket.png)
827 If you did not provide a setup ticket, you need to sign the certificate request on the master.
The setup wizard tells you to do so. The Icinga 2 service is already running at this point
829 and will automatically receive and update a signed client certificate.
831 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_no_ticket.png)
833 Icinga 2 is automatically started as a Windows service.
835 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_running_service.png)
837 The Icinga 2 configuration is stored inside the `C:\ProgramData\icinga2` directory.
838 Click `Examine Config` in the setup wizard to open a new Explorer window.
840 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_examine_config.png)
The configuration files can be modified with your favorite editor, e.g. Notepad.
In order to use the [top down](06-distributed-monitoring.md#distributed-monitoring-top-down) client
configuration, prepare the following steps.
847 You don't need any local configuration on the client except for
848 CheckCommand definitions which can be synced using the global zone
849 above. Therefore disable the inclusion of the `conf.d` directory
850 in the `icinga2.conf` file.
851 Navigate to `C:\ProgramData\icinga2\etc\icinga2` and open
the `icinga2.conf` file in your preferred editor. Remove or comment out (`//`)
the following line:

```
// Commented out, not required on a client with top down mode
//include_recursive "conf.d"
```
> **Note**
>
> Packages >= 2.9 provide an option in the setup wizard to disable this.
> Defaults to disabled.
To validate the configuration on Windows, open an administrator terminal
and run the following command:
```
C:\WINDOWS\system32>cd "C:\Program Files\ICINGA2\sbin"
C:\Program Files\ICINGA2\sbin>icinga2.exe daemon -C
```
873 **Note**: You have to run this command in a shell with `administrator` privileges.
875 Now you need to restart the Icinga 2 service. Run `services.msc` from the start menu
876 and restart the `icinga2` service. Alternatively, you can use the `net {start,stop}` CLI commands.
878 ![Icinga 2 Windows Service Start/Stop](images/distributed-monitoring/icinga2_windows_cmd_admin_net_start_stop.png)
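
For example, from an administrator terminal:

```
C:\WINDOWS\system32>net stop icinga2
C:\WINDOWS\system32>net start icinga2
```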
880 Now that you've successfully installed a Windows client, please proceed to
881 the [detailed configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
883 ## Configuration Modes <a id="distributed-monitoring-configuration-modes"></a>
885 There are different ways to ensure that the Icinga 2 cluster nodes execute
886 checks, send notifications, etc.
888 The preferred method is to configure monitoring objects on the master
889 and distribute the configuration to satellites and clients.
891 The following chapters will explain this in detail with hands-on manual configuration
892 examples. You should test and implement this once to fully understand how it works.
894 Once you are familiar with Icinga 2 and distributed monitoring, you
895 can start with additional integrations to manage and deploy your
898 * [Icinga Director](https://github.com/icinga/icingaweb2-module-director) provides a web interface to manage configuration and also allows to sync imported resources (CMDB, PuppetDB, etc.)
899 * [Ansible Roles](https://github.com/Icinga/icinga2-ansible)
900 * [Puppet Module](https://github.com/Icinga/puppet-icinga2)
901 * [Chef Cookbook](https://github.com/Icinga/chef-icinga2)
903 More details can be found [here](13-addons.md#configuration-tools).
905 ### Top Down <a id="distributed-monitoring-top-down"></a>
907 There are two different behaviors with check execution:
909 * Send a command execution event remotely: The scheduler still runs on the parent node.
910 * Sync the host/service objects directly to the child node: Checks are executed locally.
912 Again, technically it does not matter whether this is a `client` or a `satellite`
913 which is receiving configuration or command execution events.
915 ### Top Down Command Endpoint <a id="distributed-monitoring-top-down-command-endpoint"></a>
This mode will force the Icinga 2 node to execute commands remotely on a specified endpoint.
The host/service object configuration is located on the master/satellite and the client only
needs the CheckCommand object definitions used there.
Every endpoint has its own remote check queue. The number of checks executed simultaneously
can be limited on the endpoint with the `MaxConcurrentChecks` constant defined in [constants.conf](04-configuring-icinga-2.md#constants-conf). Icinga 2 may discard check requests
if the remote check queue is full.
925 ![Icinga 2 Distributed Top Down Command Endpoint](images/distributed-monitoring/icinga2_distributed_top_down_command_endpoint.png)
Advantages:

* No local checks need to be defined on the child node (client).
930 * Light-weight remote check execution (asynchronous events).
931 * No [replay log](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-command-endpoint-log-duration) is necessary for the child node.
932 * Pin checks to specific endpoints (if the child zone consists of 2 endpoints).
Disadvantages:

* If the child node is not connected, no more checks are executed.
* Requires an additional configuration attribute specified in host/service objects.
938 * Requires local `CheckCommand` object configuration. Best practice is to use a [global config zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync).
To make sure that all nodes involved will accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy
on all nodes.
944 * `icinga2-master1.localdomain` is the configuration master in this scenario.
945 * `icinga2-client1.localdomain` acts as client which receives command execution messages via command endpoint from the master. In addition, it receives the global check command configuration from the master.
947 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
949 The endpoint configuration could look like this, for example:
```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111"
}
```
Next, you need to define two zones. There is no strict naming convention; best practice is to either use `master`, `satellite`/`client-fqdn`, or to choose region names, for example `Europe`, `USA`, and `Asia`.
965 **Note**: Each client requires its own zone and endpoint configuration. Best practice
966 is to use the client's FQDN for all object names.
968 The `master` zone is a parent of the `icinga2-client1.localdomain` zone:
```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master" //establish zone hierarchy
}
```
984 You don't need any local configuration on the client except for
985 CheckCommand definitions which can be synced using the global zone
986 above. Therefore disable the inclusion of the `conf.d` directory
987 in `/etc/icinga2/icinga2.conf`.
```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/icinga2.conf

// Commented out, not required on a client as command endpoint
//include_recursive "conf.d"
```
> **Note**
>
> Packages >= 2.9 provide an option in the setup wizard to disable this.
> Defaults to disabled.
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.
1004 Example on CentOS 7:
```
[root@icinga2-client1.localdomain /]# icinga2 daemon -C
[root@icinga2-client1.localdomain /]# systemctl restart icinga2

[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```
1014 Once the clients have successfully connected, you are ready for the next step: **execute
1015 a remote check on the client using the command endpoint**.
1017 Include the host and service object configuration in the `master` zone
1018 -- this will help adding a secondary master for high-availability later.
```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
```
1024 Add the host and service objects you want to monitor. There is
1025 no limitation for files and directories -- best practice is to
1026 sort things by type.
1028 By convention a master/satellite/client host object should use the same name as the endpoint object.
1029 You can also add multiple hosts which execute checks against remote services/clients.
```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive" //check is executed on the master
  address = "192.168.56.111"

  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
```
Given that you are monitoring a Linux client, we'll add a remote [disk](10-icinga-template-library.md#plugin-check-command-disk)
check:
```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.client_endpoint
}
```
1059 If you have your own custom `CheckCommand` definition, add it to the global zone:
```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf

object CheckCommand "my-cmd" {
  //...
}
```
1070 Save the changes and validate the configuration on the master node:
```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
```
1075 Restart the Icinga 2 daemon (example for CentOS 7):
```
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```
1081 The following steps will happen:
1083 * Icinga 2 validates the configuration on `icinga2-master1.localdomain` and restarts.
1084 * The `icinga2-master1.localdomain` node schedules and executes the checks.
1085 * The `icinga2-client1.localdomain` node receives the execute command event with additional command parameters.
1086 * The `icinga2-client1.localdomain` node maps the command parameters to the local check command, executes the check locally, and sends back the check result message.
1088 As you can see, no interaction from your side is required on the client itself, and it's not necessary to reload the Icinga 2 service on the client.
1090 You have learned the basics about command endpoint checks. Proceed with
1091 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
1092 section where you can find detailed information on extending the setup.
1095 ### Top Down Config Sync <a id="distributed-monitoring-top-down-config-sync"></a>
1097 This mode syncs the object configuration files within specified zones.
1098 It comes in handy if you want to configure everything on the master node
1099 and sync the satellite checks (disk, memory, etc.). The satellites run their
1100 own local scheduler and will send the check result messages back to the master.
1102 ![Icinga 2 Distributed Top Down Config Sync](images/distributed-monitoring/icinga2_distributed_top_down_config_sync.png)
Advantages:

* Sync the configuration files from the parent zone to the child zones.
1107 * No manual restart is required on the child nodes, as syncing, validation, and restarts happen automatically.
1108 * Execute checks directly on the child node's scheduler.
1109 * Replay log if the connection drops (important for keeping the check history in sync, e.g. for SLA reports).
1110 * Use a global zone for syncing templates, groups, etc.
Disadvantages:

* Requires a config directory on the master node with the zone name underneath `/etc/icinga2/zones.d`.
1115 * Additional zone and endpoint configuration needed.
1116 * Replay log is replicated on reconnect after connection loss. This might increase the data transfer and create an overload on the connection.
To make sure that all involved nodes accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy
on all nodes.
1122 * `icinga2-master1.localdomain` is the configuration master in this scenario.
1123 * `icinga2-client2.localdomain` acts as client which receives configuration from the master. Checks are scheduled locally.
1125 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
1127 The endpoint configuration could look like this:
```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112"
}
```
Next, you need to define two zones. There is no strict naming convention; best practice is to either use `master`, `satellite`/`client-fqdn`, or to choose region names, for example `Europe`, `USA`, and `Asia`.
1143 **Note**: Each client requires its own zone and endpoint configuration. Best practice
1144 is to use the client's FQDN for all object names.
1146 The `master` zone is a parent of the `icinga2-client2.localdomain` zone:
```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master" //establish zone hierarchy
}
```
1162 Edit the `api` feature on the client `icinga2-client2.localdomain` in
1163 the `/etc/icinga2/features-enabled/api.conf` file and set
1164 `accept_config` to `true`.
```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/features-enabled/api.conf

object ApiListener "api" {
  //...

  accept_config = true
}
```
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.
1178 Example on CentOS 7:
```
[root@icinga2-client2.localdomain /]# icinga2 daemon -C
[root@icinga2-client2.localdomain /]# systemctl restart icinga2

[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```
1188 **Tip**: Best practice is to use a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
1189 for common configuration items (check commands, templates, groups, etc.).
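
A global zone is simply a zone with the `global` attribute set, defined in `zones.conf` on all nodes that should receive its contents, for example:

```
object Zone "global-templates" {
  global = true // synced to all nodes which accept config
}
```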
1191 Once the clients have connected successfully, it's time for the next step: **execute
1192 a local check on the client using the configuration sync**.
1194 Navigate to `/etc/icinga2/zones.d` on your master node
1195 `icinga2-master1.localdomain` and create a new directory with the same
1196 name as your satellite/client zone name:
```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/icinga2-client2.localdomain
```
1202 Add the host and service objects you want to monitor. There is
1203 no limitation for files and directories -- best practice is to
1204 sort things by type.
1206 By convention a master/satellite/client host object should use the same name as the endpoint object.
1207 You can also add multiple hosts which execute checks against remote services/clients.
```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/icinga2-client2.localdomain
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim hosts.conf

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  zone = "master" //optional trick: sync the required host object to the client, but enforce the "master" zone to execute the check
}
```
Given that you are monitoring a Linux client, we'll just add a local [disk](10-icinga-template-library.md#plugin-check-command-disk)
check:
```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim services.conf

object Service "disk" {
  host_name = "icinga2-client2.localdomain"

  check_command = "disk"
}
```
1233 Save the changes and validate the configuration on the master node:
```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
```
1239 Restart the Icinga 2 daemon (example for CentOS 7):
```
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```
1245 The following steps will happen:
1247 * Icinga 2 validates the configuration on `icinga2-master1.localdomain`.
1248 * Icinga 2 copies the configuration into its zone config store in `/var/lib/icinga2/api/zones`.
1249 * The `icinga2-master1.localdomain` node sends a config update event to all endpoints in the same or direct child zones.
1250 * The `icinga2-client2.localdomain` node accepts config and populates the local zone config store with the received config files.
1251 * The `icinga2-client2.localdomain` node validates the configuration and automatically restarts.
1253 Again, there is no interaction required on the client
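If you want to double-check the result on the client, you can list its zone config store. A quick sketch (the exact directory layout below the zone directory may vary between versions):

```
[root@icinga2-client2.localdomain /]# ls -R /var/lib/icinga2/api/zones/icinga2-client2.localdomain/
```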
You can also use the config sync inside a high-availability zone to
ensure that all config objects are synced among zone members.

**Note**: You can only have one so-called "config master" in a zone which stores
the configuration in the `zones.d` directory.
Multiple nodes with configuration files in the `zones.d` directory are
not supported.

Now that you've learned the basics about the configuration sync, proceed with
the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
section where you can find detailed information on extending the setup.

If you are eager to start fresh instead, you might take a look at the
[Icinga Director](https://icinga.com/docs/director/latest/).
## Scenarios <a id="distributed-monitoring-scenarios"></a>

The following examples should give you an idea on how to build your own
distributed monitoring environment. We've seen them all in production
environments and received feedback from our [community](https://community.icinga.com/)
and [partner support](https://icinga.com/support/) channels:

* [Single master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients).
* [HA master with clients as command endpoint](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients).
* [Three level cluster](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client) with config HA masters, satellites receiving config sync, and clients checked using command endpoint.

You can also extend the cluster tree depth to four levels, e.g. with two satellite levels.
Just keep in mind that multiple levels become harder to debug in case of errors.

You can also start with a single master setup, and later add a secondary
master endpoint. This requires an extra step with the [initial sync](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-initial-sync)
for cloning the runtime state. This is described in detail [here](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients).
### Master with Clients <a id="distributed-monitoring-master-clients"></a>

In this scenario, a single master node runs the check scheduler, notifications
and IDO database backend and uses the [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
to execute checks on the remote clients.

![Icinga 2 Distributed Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_clients.png)

* `icinga2-master1.localdomain` is the primary master node.
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.

Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
* Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).

Edit the `zones.conf` configuration file on the master:

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
}

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the master actively tries to connect to the client
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112" //the master actively tries to connect to the client
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ]
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master" //establish zone hierarchy
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master" //establish zone hierarchy
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
```

The two client nodes do not necessarily need to know about each other. The only important thing
is that they know about the parent zone and their endpoint members (and optionally the global zone).

If you specify the `host` attribute in the `icinga2-master1.localdomain` endpoint object,
the client will actively try to connect to the master node. Since we've specified the client
endpoint's attribute on the master node already, we don't want the clients to connect to the
master. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-client1.localdomain" {
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ]
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master" //establish zone hierarchy
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
```

```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-client2.localdomain" {
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain" ]
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master" //establish zone hierarchy
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
```
Now it is time to define the two client hosts and apply service checks using
the command endpoint execution method on them. Note: You can also use the
config sync mode here.

Create a new configuration directory on the master node:

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
```

Add the two client nodes as host objects:

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
```

Add services using command endpoint checks:

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "ping4" {
  check_command = "ping4"
  //check is executed on the master node
  assign where host.address
}

apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.client_endpoint
}
```

Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

Open Icinga Web 2 and check the two newly created client hosts with two new services
-- one executed locally (`ping4`) and one using command endpoint (`disk`).
### High-Availability Master with Clients <a id="distributed-monitoring-scenarios-ha-master-clients"></a>

This scenario is similar to the one in the [previous section](06-distributed-monitoring.md#distributed-monitoring-master-clients). The only difference is that we will now set up two master nodes in a high-availability setup.
These nodes must be configured as zone and endpoint objects.

![Icinga 2 Distributed High Availability Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_ha_master_clients.png)

The setup uses the capabilities of the Icinga 2 cluster. All zone members
replicate cluster events amongst each other. In addition to that, several Icinga 2
features can enable [HA functionality](06-distributed-monitoring.md#distributed-monitoring-high-availability-features).

Best practice is to run the database backend on a dedicated server/cluster and
only expose a virtual IP address to Icinga and the IDO feature. By default, only one
endpoint will actively write to the backend then. Typical setups for MySQL clusters
involve Master-Master-Replication (Master-Slave-Replication in both directions) or Galera;
more tips can be found on our [community forums](https://community.icinga.com/).

**Note**: All nodes in the same zone require that you enable the same features for high-availability (HA).
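For example, if checks, notifications and the IDO backend should all be highly available, the corresponding features must be enabled on both master nodes. A minimal sketch, assuming the MySQL IDO backend:

```
[root@icinga2-master1.localdomain /]# icinga2 feature enable checker notification ido-mysql
[root@icinga2-master2.localdomain /]# icinga2 feature enable checker notification ido-mysql
```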
Overview:

* `icinga2-master1.localdomain` is the config master node.
* `icinga2-master2.localdomain` is the secondary master node without config in `zones.d`.
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.

Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
* Set up `icinga2-master2.localdomain` as [client](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
* Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (when asked for adding multiple masters, answer `y` and add the secondary master `icinga2-master2.localdomain`).

In case you don't want to use the CLI commands, you can also manually create and sync the
required SSL certificates. We will modify and discuss all the details of the automatically generated configuration here.

Since there are now two nodes in the same zone, we must consider the
[high-availability features](06-distributed-monitoring.md#distributed-monitoring-high-availability-features).

* Checks and notifications are balanced between the two master nodes. That's fine, but it requires check plugins and notification scripts to exist on both nodes.
* The IDO feature will only be active on one node by default. Since all events are replicated between both nodes, it is easier to just have one central database.

One possibility is to use a dedicated MySQL cluster VIP (external application cluster)
and leave the IDO feature with enabled HA capabilities. Alternatively,
you can disable the HA feature and write to a local database on each node.
Both methods require that you configure Icinga Web 2 accordingly (monitoring
backend, IDO database, used transports, etc.).
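A minimal sketch for the second method: disable the HA capability in the IDO feature on each master node so that every node writes to its own local database (connection details below are example values):

```
object IdoMysqlConnection "ido-mysql" {
  user = "icinga"
  password = "icinga"
  host = "localhost"
  database = "icinga"

  enable_ha = false //write to the local database on this node
}
```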
> **Note**:
>
> You can also start with a single master shown [here](06-distributed-monitoring.md#distributed-monitoring-master-clients) and later add
> the second master. This requires an extra step with the [initial sync](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-initial-sync)
> for cloning the runtime state. Once done, proceed here.

The zone hierarchy could look like this. It involves putting the two master nodes
`icinga2-master1.localdomain` and `icinga2-master2.localdomain` into the `master` zone.

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-master2.localdomain" {
  host = "192.168.56.102"
}

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the master actively tries to connect to the client
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112" //the master actively tries to connect to the client
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master" //establish zone hierarchy
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master" //establish zone hierarchy
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
```

The two client nodes do not necessarily need to know about each other. The only important thing
is that they know about the parent zone and their endpoint members (and optionally about the global zone).

If you specify the `host` attribute in the `icinga2-master1.localdomain` and `icinga2-master2.localdomain`
endpoint objects, the client will actively try to connect to the master node. Since we've specified the client
endpoint's attribute on the master node already, we don't want the clients to connect to the
master nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-master2.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-client1.localdomain" {
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master" //establish zone hierarchy
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
```

```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-master2.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-client2.localdomain" {
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master" //establish zone hierarchy
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
```
Now it is time to define the two client hosts and apply service checks using
the command endpoint execution method. Note: You can also use the
config sync mode here.

Create a new configuration directory on the master node `icinga2-master1.localdomain`.
**Note**: The secondary master node `icinga2-master2.localdomain` receives the
configuration using the [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync).

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
```

Add the two client nodes as host objects:

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
```

Add services using command endpoint checks:

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "ping4" {
  check_command = "ping4"
  //check is executed on the master node
  assign where host.address
}

apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.client_endpoint
}
```

Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

Open Icinga Web 2 and check the two newly created client hosts with two new services
-- one executed locally (`ping4`) and one using command endpoint (`disk`).

**Tip**: It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
to make sure that your cluster notifies you in case of failure.
### Three Levels with Master, Satellites, and Clients <a id="distributed-monitoring-scenarios-master-satellite-client"></a>

This scenario combines everything you've learned so far: High-availability masters,
satellites receiving their configuration from the master zone, and clients checked via command
endpoint from the satellite zones.

![Icinga 2 Distributed Master and Satellites with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_satellite_client.png)

> **Tip**:
>
> It can get complicated, so grab a pen and paper and bring your thoughts to life.
> Play around with a test setup before using it in a production environment!

Best practice is to run the database backend on a dedicated server/cluster and
only expose a virtual IP address to Icinga and the IDO feature. By default, only one
endpoint will actively write to the backend then. Typical setups for MySQL clusters
involve Master-Master-Replication (Master-Slave-Replication in both directions) or Galera;
more tips can be found on our [community forums](https://community.icinga.com/).

Overview:

* `icinga2-master1.localdomain` is the config master node.
* `icinga2-master2.localdomain` is the secondary master node without configuration in `zones.d`.
* `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` are satellite nodes in a `master` child zone. They forward CSR signing requests to the master zone.
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.

Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
* Set up `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
* Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).

When asked for the parent endpoint providing CSR auto-signing capabilities,
please add one of the satellite nodes. **Note**: This requires Icinga 2 v2.8+
and the `CA Proxy` on all master, satellite and client nodes.
Example for `icinga2-client1.localdomain`:

```
Please specify the parent endpoint(s) (master or satellite) where this node should connect to:
```

Parent endpoint is the first satellite `icinga2-satellite1.localdomain`:

```
Master/Satellite Common Name (CN from your master/satellite node): icinga2-satellite1.localdomain
Do you want to establish a connection to the parent node from this node? [Y/n]: y
Please specify the master/satellite connection information:
Master/Satellite endpoint host (IP address or FQDN): 192.168.56.105
Master/Satellite endpoint port [5665]: 5665
```

Add the second satellite `icinga2-satellite2.localdomain` as parent:

```
Add more master/satellite endpoints? [y/N]: y

Master/Satellite Common Name (CN from your master/satellite node): icinga2-satellite2.localdomain
Do you want to establish a connection to the parent node from this node? [Y/n]: y
Please specify the master/satellite connection information:
Master/Satellite endpoint host (IP address or FQDN): 192.168.56.106
Master/Satellite endpoint port [5665]: 5665

Add more master/satellite endpoints? [y/N]: n
```

The specified parent nodes will forward the CSR signing request to the master instances.

Proceed with adding the optional client ticket for [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing):

```
Please specify the request ticket generated on your Icinga 2 master (optional).
 (Hint: # icinga2 pki ticket --cn 'icinga2-client1.localdomain'):
4f75d2ecd253575fe9180938ebff7cbca262f96e
```

In case you've chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
you can leave the ticket question blank.

Instead, Icinga 2 tells you to approve the request later on the master node.

```
No ticket was specified. Please approve the certificate signing request manually
on the master (see 'icinga2 ca list' and 'icinga2 ca sign --help' for details).
```
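In that case you can list the pending requests on the master and sign the client's request by its fingerprint later on. A sketch (the fingerprint below is a placeholder taken from the `icinga2 ca list` output):

```
[root@icinga2-master1.localdomain /]# icinga2 ca list
[root@icinga2-master1.localdomain /]# icinga2 ca sign 4f75d2ecd253575fe9180938ebff7cbca262f96e
```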
You can optionally specify a different bind host and/or port.

```
Please specify the API bind host/port (optional):
Bind Host []:
Bind Port []:
```

The next step asks you to accept configuration (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync))
and commands (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).

```
Accept config from parent node? [y/N]: y
Accept commands from parent node? [y/N]: y
```

Next you can optionally specify the local and parent zone names. This will be reflected
in the generated zone configuration file.

```
Local zone name [icinga2-client1.localdomain]: icinga2-client1.localdomain
```

Set the parent zone name to `satellite` for this client.

```
Parent zone name [master]: satellite
```

You can add more global zones in addition to `global-templates` and `director-global` if necessary.
Press `Enter` or choose `n` if you don't want to add any additional ones.

```
Reconfiguring Icinga...

Default global zones: global-templates director-global
Do you want to specify additional global zones? [y/N]: N
```

Last but not least, the wizard asks you whether you want to disable the inclusion of the local configuration
directory in `conf.d`, or not. This defaults to disabled, as clients are either checked via command endpoint or
they receive configuration synced from the parent zone.

```
Do you want to disable the inclusion of the conf.d directory [Y/n]: Y
Disabling the inclusion of the conf.d directory...
```
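Based on these answers, the wizard enables the corresponding flags in the `api` feature. The resulting file should roughly look like this (a sketch; additional attributes may be present):

```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/features-enabled/api.conf

object ApiListener "api" {
  //...
  accept_config = true
  accept_commands = true
}
```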
**We'll discuss the details of the required configuration below. Most of this
configuration can be rendered by the setup wizards.**

The zone hierarchy can look like this. We'll define only the directly connected zones here.

The master instances should actively connect to the satellite instances, therefore
the configuration on `icinga2-master1.localdomain` and `icinga2-master2.localdomain`
must include the `host` attribute for the satellite endpoints:

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-master2.localdomain" {
  host = "192.168.56.102"
}

object Endpoint "icinga2-satellite1.localdomain" {
  host = "192.168.56.105"
}

object Endpoint "icinga2-satellite2.localdomain" {
  host = "192.168.56.106"
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]

  parent = "master"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}

object Zone "director-global" {
  global = true
}
```
In contrast to that, the satellite instances `icinga2-satellite1.localdomain`
and `icinga2-satellite2.localdomain` should not actively connect to the master
instances:

```
[root@icinga2-satellite1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  //this endpoint will connect to us
}

object Endpoint "icinga2-master2.localdomain" {
  //this endpoint will connect to us
}

object Endpoint "icinga2-satellite1.localdomain" {
  host = "192.168.56.105"
}

object Endpoint "icinga2-satellite2.localdomain" {
  host = "192.168.56.106"
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]

  parent = "master"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}

object Zone "director-global" {
  global = true
}
```

Keep in mind to control the endpoint [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction)
using the `host` attribute, also for other endpoints in the same zone.

Adapt the configuration for `icinga2-master2.localdomain` and `icinga2-satellite2.localdomain` accordingly.
Since we want to use [top down command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) checks,
we must configure the client endpoint and zone objects.
In order to minimize the effort, we'll sync the client zone and endpoint configuration to the
satellites where the connection information is needed as well.

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/{master,satellite,global-templates}
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite

[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the satellite actively tries to connect to the client
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "satellite"
}

[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112" //the satellite actively tries to connect to the client
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "satellite"
}
```

The two client nodes do not necessarily need to know about each other, either. The only important thing
is that they know about the parent zone (the satellite) and their endpoint members (and optionally the global zone).

If you specify the `host` attribute in the `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain`
endpoint objects, the client node will actively try to connect to the satellite node. Since we've specified the client
endpoint's attribute on the satellite node already, we don't want the client node to connect to the
satellite nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
Example for `icinga2-client1.localdomain`:

```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-satellite1.localdomain" {
  //do not actively connect to the satellite by leaving out the 'host' attribute
}

object Endpoint "icinga2-satellite2.localdomain" {
  //do not actively connect to the satellite by leaving out the 'host' attribute
}

object Endpoint "icinga2-client1.localdomain" {
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "satellite"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}

object Zone "director-global" {
  global = true
}
```

Example for `icinga2-client2.localdomain`:

```
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-satellite1.localdomain" {
  //do not actively connect to the satellite by leaving out the 'host' attribute
}

object Endpoint "icinga2-satellite2.localdomain" {
  //do not actively connect to the satellite by leaving out the 'host' attribute
}

object Endpoint "icinga2-client2.localdomain" {
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "satellite"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}

object Zone "director-global" {
  global = true
}
```
Now it is time to define the two client hosts on the master, sync them to the satellites
and apply service checks using the command endpoint execution method to them.
Add the two client nodes as host objects to the `satellite` zone.

We've already created the directories in `/etc/icinga2/zones.d` including the files for the
zone and endpoint configuration for the clients.

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite
```

Add the host object configuration for the `icinga2-client1.localdomain` client. You should
have created the configuration file in the previous steps; it should contain the endpoint
and zone object configuration already.

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
```

Add the host object configuration for the `icinga2-client2.localdomain` client configuration file:

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
```

Add a service object which is executed on the satellite nodes (e.g. `ping4`). Pin the apply rule to the `satellite` zone only.

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf

apply Service "ping4" {
  check_command = "ping4"
  //check is executed on the satellite node
  assign where host.zone == "satellite" && host.address
}
```

Add services using command endpoint checks. Pin the apply rules to the `satellite` zone only.

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf

apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.zone == "satellite" && host.vars.client_endpoint
}
```

Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

Open Icinga Web 2 and check the two newly created client hosts with two new services
-- one executed locally (`ping4`) and one using command endpoint (`disk`).

> **Tip**:
>
> It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
> to make sure that your cluster notifies you in case of failure.
## Best Practice <a id="distributed-monitoring-best-practice"></a>

We've put together a collection of configuration examples from community feedback.
If you'd like to share your tips and tricks with us, please join the [community channels](https://icinga.com/community/)!

### Global Zone for Config Sync <a id="distributed-monitoring-global-zone-config-sync"></a>

Global zones can be used to sync generic configuration objects
to all nodes depending on them. Common examples are:

* Templates which are imported into zone specific objects.
* Command objects referenced by Host, Service, Notification objects.
* Apply rules for services, notifications and dependencies.
* User objects referenced in notifications.
* Group objects.
* TimePeriod objects.

Plugin scripts and binaries cannot be synced; this is for Icinga 2
configuration files only. Use your preferred package repository
and/or configuration management tool (Puppet, Ansible, Chef, etc.)
to keep them in sync.

**Note**: Checkable objects (hosts and services) cannot be put into a global
zone. The configuration validation will terminate with an error.

The zone object configuration must be deployed on all nodes which should receive
the global configuration files:
```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

object Zone "global-commands" {
  global = true
}
```

The default global zones generated by the setup wizards are called `global-templates` and `director-global`.
While you can use `global-templates` for your own global configuration, `director-global` is reserved for use
by [Icinga Director](https://icinga.com/docs/director/latest/). Please don't
place any configuration in it manually.

Similar to the zone configuration sync you'll need to create a new directory in
`/etc/icinga2/zones.d`:

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-commands
```

Next, add a new check command, for example:

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-commands/web.conf

object CheckCommand "webinject" {
  //...
}
```

Restart the client(s) which should receive the global zone
before restarting the parent master/satellite nodes.

Then validate the configuration on the master node and restart Icinga 2.
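Following the same pattern as in the previous sections:

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```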
**Tip**: You can copy the example configuration files located in `/etc/icinga2/conf.d`
into the default global zone `global-templates`.

Example:

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/conf.d
[root@icinga2-master1.localdomain /etc/icinga2/conf.d]# cp {commands,groups,notifications,services,templates,timeperiods,users}.conf /etc/icinga2/zones.d/global-templates
```
### Health Checks <a id="distributed-monitoring-health-checks"></a>

In case of network failures or other problems, your monitoring might
either have late check results or just send out mass alarms for unknown
checks.

In order to minimize the problems caused by this, you should configure
additional health checks.

The `cluster` check, for example, will check if all endpoints in the current zone and the directly
connected zones are working properly:

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf

object Host "icinga2-master1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.101"
}

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf

object Service "cluster" {
  check_command = "cluster"
  check_interval = 5s
  retry_interval = 1s

  host_name = "icinga2-master1.localdomain"
}
```

The `cluster-zone` check will test whether the configured target zone is currently
connected or not. This example adds a health check for the [ha master with clients scenario](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients).

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf

apply Service "cluster-health" {
  check_command = "cluster-zone"

  display_name = "cluster-health-" + host.name

  /* This follows the convention that the client zone name is the FQDN which is the same as the host object name. */
  vars.cluster_zone = host.name

  assign where host.vars.client_endpoint
}
```

In case you cannot assign the `cluster_zone` attribute, add specific
checks to your cluster:

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf

object Service "cluster-zone-satellite" {
  check_command = "cluster-zone"
  check_interval = 5s
  retry_interval = 1s
  vars.cluster_zone = "satellite"

  host_name = "icinga2-master1.localdomain"
}
```

If you are using top down checks with command endpoint configuration, you can
add a dependency which prevents notifications for all other failing services:

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/dependencies.conf

apply Dependency "health-check" to Service {
  parent_service_name = "child-health"

  disable_notifications = true

  assign where host.vars.client_endpoint
  ignore where service.name == "child-health"
}
```
### Pin Checks in a Zone <a id="distributed-monitoring-pin-checks-zone"></a>

In case you want to pin specific checks to their endpoints in a given zone you'll need to use
the `command_endpoint` attribute. This is reasonable if you want to
execute a local disk check in the `master` zone on a specific endpoint.

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf

object Host "icinga2-master1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.101"
}

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf

apply Service "disk" {
  check_command = "disk"

  command_endpoint = host.name //requires a host object matching the endpoint object name e.g. icinga2-master1.localdomain

  assign where host.zone == "master" && match("icinga2-master*", host.name)
}
```

The `host.zone` attribute check inside the expression ensures that
the service object is only created for host objects inside the `master`
zone. In addition to that, the [match](18-library-reference.md#global-functions-match)
function ensures that services are only created for the master nodes.
### Windows Firewall <a id="distributed-monitoring-windows-firewall"></a>

#### ICMP Requests <a id="distributed-monitoring-windows-firewall-icmp"></a>

By default, ICMP requests are disabled in the Windows firewall. You can
change that by [adding a new rule](https://support.microsoft.com/en-us/kb/947709).

```
C:\WINDOWS\system32>netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow
```

#### Icinga 2 <a id="distributed-monitoring-windows-firewall-icinga2"></a>

If your master/satellite nodes should actively connect to the Windows client,
you'll also need to ensure that port `5665` is enabled.

```
C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 5665 (Icinga 2)" dir=in action=allow protocol=TCP localport=5665
```

#### NSClient++ API <a id="distributed-monitoring-windows-firewall-nsclient-api"></a>

If the [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
plugin is used to query NSClient++, you need to ensure that its port is enabled.

```
C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 8443 (NSClient++ API)" dir=in action=allow protocol=TCP localport=8443
```

For security reasons, it is advised to enable the NSClient++ HTTP API for
connections from the local Icinga 2 client only. Remote connections to the
legacy HTTP API are not recommended.
### Windows Client and Plugins <a id="distributed-monitoring-windows-plugins"></a>

The Icinga 2 package on Windows already provides several plugins.
Detailed [documentation](10-icinga-template-library.md#windows-plugins) is available for all check command definitions.

Add the following `include` statement on all your nodes (master, satellite, client):

```
vim /etc/icinga2/icinga2.conf

include <windows-plugins>
```

Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
scenario we'll now add a local disk check.

First, add the client node as host object:

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
  vars.os_type = "windows"
}
```

Next, add the disk check using command endpoint checks (details in the
[disk-windows](10-icinga-template-library.md#windows-plugins-disk-windows) documentation):

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "disk C:" {
  check_command = "disk-windows"

  vars.disk_win_path = "C:"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.os_type == "windows" && host.vars.client_endpoint
}
```

Validate the configuration and restart Icinga 2.

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

Open Icinga Web 2 and check your newly added Windows disk check :)

![Icinga 2 Client Windows](images/distributed-monitoring/icinga2_distributed_windows_client_disk_icingaweb2.png)

If you want to add your own plugins, please check [this chapter](05-service-monitoring.md#service-monitoring-requirements)
for the requirements.
### Windows Client and NSClient++ <a id="distributed-monitoring-windows-nscp"></a>

There are two methods available for querying NSClient++:

* Query the [HTTP API](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api) locally from an Icinga 2 client (requires a running NSClient++ service)
* Run a [local CLI check](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-local) (does not require NSClient++ as a service)

Both methods have their advantages and disadvantages. One thing to
note: If you rely on performance counter delta calculations such as
CPU utilization, please use the HTTP API instead of the CLI sample call.
#### NSClient++ with check_nscp_api <a id="distributed-monitoring-windows-nscp-check-api"></a>

The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
you to install the NSClient++ package. In addition to the Windows plugins you can
use the [nscp_api command](10-icinga-template-library.md#nscp-check-api) provided by the Icinga Template Library (ITL).

The initial setup for the NSClient++ API and the required arguments
is described in the ITL chapter for the [nscp_api](10-icinga-template-library.md#nscp-check-api) CheckCommand.

Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
scenario we'll now add a local nscp check which queries the NSClient++ API to check the free disk space.

Define a host object called `icinga2-client1.localdomain` on the master. Add the `nscp_api_password`
custom attribute and specify the drives to check.

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
  vars.os_type = "Windows"
  vars.nscp_api_password = "icinga"
  vars.drives = [ "C:", "D:" ]
}
```

The service checks are generated using an [apply for](03-monitoring-basics.md#using-apply-for)
rule based on `host.vars.drives`:

```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "nscp-api-" for (drive in host.vars.drives) {
  import "generic-service"

  check_command = "nscp_api"
  command_endpoint = host.vars.client_endpoint

  //display_name = "nscp-drive-" + drive

  vars.nscp_api_host = "localhost"
  vars.nscp_api_query = "check_drivesize"
  vars.nscp_api_password = host.vars.nscp_api_password
  vars.nscp_api_arguments = [ "drive=" + drive ]

  ignore where host.vars.os_type != "Windows"
}
```

Validate the configuration and restart Icinga 2.

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

Two new services ("nscp-drive-D:" and "nscp-drive-C:") will be visible in Icinga Web 2.

![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-api](images/distributed-monitoring/icinga2_distributed_windows_nscp_api_drivesize_icingaweb2.png)

Note: You can also omit the `command_endpoint` configuration to execute
the command on the master. This also requires a different value for `nscp_api_host`,
which defaults to `host.address`.

```
  //command_endpoint = host.vars.client_endpoint
  //vars.nscp_api_host = "localhost"
```

You can verify the check execution by looking at the `Check Source` attribute
in Icinga Web 2 or the REST API.
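For example, you can fetch the latest check result over the REST API; the `check_source` attribute is part of `last_check_result`. A sketch, assuming an API user `root` with password `icinga` was created during setup:

```
[root@icinga2-master1.localdomain /]# curl -k -s -u root:icinga \
  'https://localhost:5665/v1/objects/services/icinga2-client1.localdomain!nscp-api-C%3A?attrs=last_check_result'
```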
If you want to monitor specific Windows services, you could use the following example:

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
  vars.os_type = "Windows"
  vars.nscp_api_password = "icinga"
  vars.services = [ "Windows Update", "wscsvc" ]
}

[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "nscp-api-" for (svc in host.vars.services) {
  import "generic-service"

  check_command = "nscp_api"
  command_endpoint = host.vars.client_endpoint

  //display_name = "nscp-service-" + svc

  vars.nscp_api_host = "localhost"
  vars.nscp_api_query = "check_service"
  vars.nscp_api_password = host.vars.nscp_api_password
  vars.nscp_api_arguments = [ "service=" + svc ]

  ignore where host.vars.os_type != "Windows"
}
```
#### NSClient++ with nscp-local <a id="distributed-monitoring-windows-nscp-check-local"></a>

The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
you to install the NSClient++ package. In addition to the Windows plugins you can
use the [nscp-local commands](10-icinga-template-library.md#nscp-plugin-check-commands)
provided by the Icinga Template Library (ITL).

![Icinga 2 Distributed Monitoring Windows Client with NSClient++](images/distributed-monitoring/icinga2_distributed_windows_nscp.png)

Add the following `include` statement on all your nodes (master, satellite, client):

```
vim /etc/icinga2/icinga2.conf

include <nscp>
```

The CheckCommand definitions will automatically determine the installed path
to the `nscp.exe` binary.

Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
scenario we'll now add a local nscp check querying a given performance counter.

First, add the client node as host object:

```
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
  vars.os_type = "windows"
}
```

Next, add a performance counter check using command endpoint checks (details in the
[nscp-local-counter](10-icinga-template-library.md#nscp-check-local-counter) documentation):
```
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "nscp-local-counter-cpu" {
  check_command = "nscp-local-counter"
  command_endpoint = host.vars.client_endpoint

  vars.nscp_counter_name = "\\Processor(_total)\\% Processor Time"
  vars.nscp_counter_perfsyntax = "Total Processor Time"
  vars.nscp_counter_warning = 1
  vars.nscp_counter_critical = 5

  vars.nscp_counter_showall = true

  assign where host.vars.os_type == "windows" && host.vars.client_endpoint
}
```

Validate the configuration and restart Icinga 2.

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```

Open Icinga Web 2 and check your newly added Windows NSClient++ check :)

![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-local](images/distributed-monitoring/icinga2_distributed_windows_nscp_counter_icingaweb2.png)

> **Note**:
>
> In order to measure CPU load, you'll need a running NSClient++ service.
> Therefore it is advised to use a local [nscp-api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
> check against its REST API.
## Advanced Hints <a id="distributed-monitoring-advanced-hints"></a>

You can find additional hints in this section if you prefer to go your own route
with automating setups (setup, certificates, configuration).

### Certificate Auto-Renewal <a id="distributed-monitoring-certificate-auto-renewal"></a>

Icinga 2 v2.8+ added the possibility that nodes request certificate updates
on their own. If their expiration date is soon enough, they automatically
renew their already signed certificate by sending a signing request to the
parent node. You'll also see a message in the logs once certificate renewal
has taken place.
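If you want to verify a node's certificate expiration date manually, `openssl` can read it from the certs directory. A sketch; adjust the file name to your node's CN:

```
[root@icinga2-client1.localdomain /]# openssl x509 -enddate -noout -in /var/lib/icinga2/certs/icinga2-client1.localdomain.crt
```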
### High-Availability for Icinga 2 Features <a id="distributed-monitoring-high-availability-features"></a>

All nodes in the same zone require that you enable the same features for high-availability (HA).

By default, the following features provide advanced HA functionality:

* [Checks](06-distributed-monitoring.md#distributed-monitoring-high-availability-checks) (load balanced, automated failover).
* [Notifications](06-distributed-monitoring.md#distributed-monitoring-high-availability-notifications) (load balanced, automated failover).
* [DB IDO](06-distributed-monitoring.md#distributed-monitoring-high-availability-db-ido) (Run-Once, automated failover).
* [Elasticsearch](09-object-types.md#objecttype-elasticsearchwriter)
* [Gelf](09-object-types.md#objecttype-gelfwriter)
* [Graphite](09-object-types.md#objecttype-graphitewriter)
* [InfluxDB](09-object-types.md#objecttype-influxdbwriter)
* [OpenTsdb](09-object-types.md#objecttype-opentsdbwriter)
* [Perfdata](09-object-types.md#objecttype-perfdatawriter) (for PNP)

#### High-Availability with Checks <a id="distributed-monitoring-high-availability-checks"></a>

All instances within the same zone (e.g. the `master` zone as HA cluster) must
have the `checker` feature enabled.

Example:

```
# icinga2 feature enable checker
```

All nodes in the same zone load-balance the check execution. If one instance shuts down,
the other nodes will automatically take over the remaining checks.
#### High-Availability with Notifications <a id="distributed-monitoring-high-availability-notifications"></a>

All instances within the same zone (e.g. the `master` zone as HA cluster) must
have the `notification` feature enabled.

Example:

```
# icinga2 feature enable notification
```

Notifications are load-balanced amongst all nodes in a zone. By default this functionality
is enabled.
If your nodes should send out notifications independently from any other nodes (this will cause
duplicated notifications if not properly handled!), you can set `enable_ha = false`
in the [NotificationComponent](09-object-types.md#objecttype-notificationcomponent) feature.
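A minimal sketch for disabling this on one node:

```
[root@icinga2-master2.localdomain /]# vim /etc/icinga2/features-enabled/notification.conf

object NotificationComponent "notification" {
  enable_ha = false //send notifications independently on this node
}
```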
#### High-Availability with DB IDO <a id="distributed-monitoring-high-availability-db-ido"></a>

All instances within the same zone (e.g. the `master` zone as HA cluster) must
have the DB IDO feature enabled.

Example DB IDO MySQL:

```
# icinga2 feature enable ido-mysql
```

By default the DB IDO feature only runs on one node. All other nodes in the same zone disable
the active IDO database connection at runtime. The node with the active DB IDO connection is
not necessarily the zone master.

**Note**: The DB IDO HA feature can be disabled by setting the `enable_ha` attribute to `false`
for the [IdoMysqlConnection](09-object-types.md#objecttype-idomysqlconnection) or
[IdoPgsqlConnection](09-object-types.md#objecttype-idopgsqlconnection) object on **all** nodes in the
same zone.

All endpoints will then enable the DB IDO feature and connect to the configured
database and dump configuration, status and historical data on their own.

If the instance with the active DB IDO connection dies, the HA functionality will
automatically elect a new DB IDO master.

The DB IDO feature will try to determine which cluster endpoint is currently writing
to the database and bail out if another endpoint is active. You can manually verify that
by running the following query command:

```
icinga=> SELECT status_update_time, endpoint_name FROM icinga_programstatus;

   status_update_time   | endpoint_name
------------------------+------------------------------
 2016-08-15 15:52:26+02 | icinga2-master1.localdomain
(1 row)
```

This is useful when the cluster connection between endpoints breaks, and prevents
data duplication in split-brain scenarios. The failover timeout can be configured with the
`failover_timeout` attribute, but not lower than 60 seconds.
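A sketch showing the attribute in the IDO feature configuration (the value is an example):

```
object IdoMysqlConnection "ido-mysql" {
  //...
  failover_timeout = 120s //must not be lower than 60 seconds
}
```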
### Endpoint Connection Direction <a id="distributed-monitoring-advanced-hints-connection-direction"></a>

Nodes will attempt to connect to another node when its local [Endpoint](09-object-types.md#objecttype-endpoint) object
configuration specifies a valid `host` attribute (FQDN or IP address).

Example for the master node `icinga2-master1.localdomain` actively connecting
to the client node `icinga2-client1.localdomain`:

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

//...

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the master actively tries to connect to the client
}
```

Example for the client node `icinga2-client1.localdomain` not actively
connecting to the master node `icinga2-master1.localdomain`:

```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

//...

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}
```

It is not necessary that both the master and the client node establish
two connections to each other. Icinga 2 will only use one connection
and close the second connection if established.

**Tip**: Choose either to let master/satellite nodes connect to client nodes,
or vice versa.
### Disable Log Duration for Command Endpoints <a id="distributed-monitoring-advanced-hints-command-endpoint-log-duration"></a>

The replay log is a built-in mechanism to ensure that nodes in a distributed setup
keep the same history (check results, notifications, etc.) when nodes are temporarily
disconnected and then reconnect.

This functionality is not needed when a master/satellite node is sending check
execution events to a client which is purely configured for [command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
checks only.

The [Endpoint](09-object-types.md#objecttype-endpoint) object attribute `log_duration` can
be lowered or set to `0` to fully disable any log replay updates when the
client is not connected.

Configuration on the master node `icinga2-master1.localdomain`:

```
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

//...

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the master actively tries to connect to the client
  log_duration = 0
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112" //the master actively tries to connect to the client
  log_duration = 0
}
```

Configuration on the client `icinga2-client1.localdomain`:

```
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

//...

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
  log_duration = 0
}

object Endpoint "icinga2-master2.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
  log_duration = 0
}
```
2792 ### Initial Sync for new Endpoints in a Zone <a id="distributed-monitoring-advanced-hints-initial-sync"></a>
In order to make sure that all of your zone endpoints have the same state you need
to pick the authoritative running one and copy the following content:

* State file from `/var/lib/icinga2/icinga2.state`
* Internal config package for runtime created objects (downtimes, comments, hosts, etc.) at `/var/lib/icinga2/api/packages/_api`

If you need already deployed config packages from the Director, or synced cluster zones,
you can also sync the entire `/var/lib/icinga2` directory. This directory should also be
included in your [backup strategy](02-getting-started.md#install-backup).
> **Note**
>
> Ensure that all endpoints are shut down during this procedure. Once you have
> synced the cached files, proceed with configuring the remaining endpoints
> to let them know about the new master/satellite node (zones.conf).
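A minimal sketch of this procedure via SSH, assuming `icinga2-master1.localdomain` is the authoritative endpoint and `icinga2-master2.localdomain` the new one (host names are placeholders):

```
# stop Icinga 2 on both endpoints before copying anything
[root@icinga2-master1.localdomain /]# systemctl stop icinga2

# copy the state file and the runtime config package to the new endpoint
[root@icinga2-master1.localdomain /]# scp /var/lib/icinga2/icinga2.state root@icinga2-master2.localdomain:/var/lib/icinga2/
[root@icinga2-master1.localdomain /]# scp -r /var/lib/icinga2/api/packages/_api root@icinga2-master2.localdomain:/var/lib/icinga2/api/packages/

# fix the ownership on the new endpoint, then start both endpoints again
[root@icinga2-master2.localdomain /]# chown -R icinga:icinga /var/lib/icinga2
[root@icinga2-master2.localdomain /]# systemctl start icinga2
[root@icinga2-master1.localdomain /]# systemctl start icinga2
```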
### Manual Certificate Creation <a id="distributed-monitoring-advanced-hints-certificates-manual"></a>

#### Create CA on the Master <a id="distributed-monitoring-advanced-hints-certificates-manual-ca"></a>
Choose the host which should store the certificate authority (one of the master nodes).

The first step is the creation of the certificate authority (CA) by running the following command:

```
[root@icinga2-master1.localdomain /root]# icinga2 pki new-ca
```
#### Create CSR and Certificate <a id="distributed-monitoring-advanced-hints-certificates-manual-create"></a>

Create a certificate signing request (CSR) for the local instance:

```
[root@icinga2-master1.localdomain /root]# icinga2 pki new-cert --cn icinga2-master1.localdomain \
  --key icinga2-master1.localdomain.key \
  --csr icinga2-master1.localdomain.csr
```
Sign the CSR with the previously created CA:

```
[root@icinga2-master1.localdomain /root]# icinga2 pki sign-csr --csr icinga2-master1.localdomain.csr --cert icinga2-master1.localdomain.crt
```

Repeat these steps for all instances in your setup.
#### Copy Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-copy"></a>

Copy the host's certificate files and the public CA certificate to `/var/lib/icinga2/certs`:

```
[root@icinga2-master1.localdomain /root]# mkdir -p /var/lib/icinga2/certs
[root@icinga2-master1.localdomain /root]# cp icinga2-master1.localdomain.{crt,key} /var/lib/icinga2/certs
[root@icinga2-master1.localdomain /root]# cp /var/lib/icinga2/ca/ca.crt /var/lib/icinga2/certs
```

Ensure that proper permissions are set (replace `icinga` with the Icinga 2 daemon user):

```
[root@icinga2-master1.localdomain /root]# chown -R icinga:icinga /var/lib/icinga2/certs
[root@icinga2-master1.localdomain /root]# chmod 600 /var/lib/icinga2/certs/*.key
[root@icinga2-master1.localdomain /root]# chmod 644 /var/lib/icinga2/certs/*.crt
```

The CA public and private key are stored in the `/var/lib/icinga2/ca` directory. Keep this path secure and include
it in your backups.
#### Create Multiple Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-multiple"></a>

Use your preferred method to automate the certificate generation process.

```
[root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do icinga2 pki new-cert --cn $node --csr $node.csr --key $node.key; done
information/base: Writing private key to 'icinga2-master1.localdomain.key'.
information/base: Writing certificate signing request to 'icinga2-master1.localdomain.csr'.
information/base: Writing private key to 'icinga2-master2.localdomain.key'.
information/base: Writing certificate signing request to 'icinga2-master2.localdomain.csr'.
information/base: Writing private key to 'icinga2-satellite1.localdomain.key'.
information/base: Writing certificate signing request to 'icinga2-satellite1.localdomain.csr'.

[root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do icinga2 pki sign-csr --csr $node.csr --cert $node.crt; done
information/pki: Writing certificate to file 'icinga2-master1.localdomain.crt'.
information/pki: Writing certificate to file 'icinga2-master2.localdomain.crt'.
information/pki: Writing certificate to file 'icinga2-satellite1.localdomain.crt'.
```
Copy these certificates to the respective instances, e.g. with SSH/SCP.
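For example, a small loop can distribute them (a sketch, assuming root SSH access and an existing `/var/lib/icinga2/certs` directory on the target hosts):

```
[root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master2.localdomain icinga2-satellite1.localdomain; do
  scp $node.crt $node.key /var/lib/icinga2/ca/ca.crt root@$node:/var/lib/icinga2/certs/
done
```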
## Automation <a id="distributed-monitoring-automation"></a>

These hints should get you started with your own automation tools (Puppet, Ansible, Chef, Salt, etc.)
or custom scripts for automated setup.

These are collected best practices from various community channels:

* [Silent Windows setup](06-distributed-monitoring.md#distributed-monitoring-automation-windows-silent)
* [Node Setup CLI command](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup) with parameters
If you prefer an alternate method, we still recommend leaving all the Icinga 2 features intact (e.g. `icinga2 feature enable api`).
You should also use well-known and documented default configuration file locations (e.g. `zones.conf`).
This will tremendously help when someone is trying to assist you in the [community channels](https://icinga.com/community/).
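A typical automation step, for instance, enables the required features and validates the configuration before restarting the daemon; both are standard Icinga 2 CLI commands:

```
# enable the cluster API feature (idempotent, safe to re-run)
[root@icinga2-client1.localdomain /]# icinga2 feature enable api

# validate the configuration before restarting the service
[root@icinga2-client1.localdomain /]# icinga2 daemon -C
```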
### Silent Windows Setup <a id="distributed-monitoring-automation-windows-silent"></a>

If you want to install the client silently/unattended, use the `/qn` modifier. The
installation should not trigger a restart, but if you want to be completely sure, you can use the `/norestart` modifier.

```
C:> msiexec /i C:\Icinga2-v2.5.0-x86.msi /qn /norestart
```

Once the setup is completed you can use the `node setup` CLI command, too.
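For example, an unattended client setup could chain both steps (a sketch; the default installation path, the `<ticket>` placeholder, and the endpoint details are assumptions for illustration):

```
C:> msiexec /i C:\Icinga2-v2.5.0-x86.msi /qn /norestart

C:> "C:\Program Files\ICINGA2\sbin\icinga2.exe" node setup --ticket <ticket> ^
  --cn icinga2-client2.localdomain ^
  --endpoint icinga2-master1.localdomain ^
  --zone icinga2-client2.localdomain ^
  --parent_zone master ^
  --parent_host 192.168.56.101 ^
  --accept-commands --accept-config
```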
### Node Setup using CLI Parameters <a id="distributed-monitoring-automation-cli-node-setup"></a>

Instead of using the `node wizard` CLI command, there is an alternative `node setup`
command available which has some prerequisites.

**Note**: The CLI command can be used on Linux/Unix and Windows operating systems.
The graphical Windows setup wizard actively uses these CLI commands.
#### Node Setup on the Master Node <a id="distributed-monitoring-automation-cli-node-setup-master"></a>

In case you want to set up a master node you must add the `--master` parameter
to the `node setup` CLI command. In addition, the `--cn` parameter can optionally
be passed (it defaults to the FQDN).
Parameter | Description
--------------------|--------------------
Common name (CN) | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN. Defaults to the FQDN.
Zone name | **Optional.** Specified with the `--zone` parameter. Defaults to `master`.
Listen on | **Optional.** Specified with the `--listen` parameter. Syntax is `host,port`.
Disable conf.d | **Optional.** Specified with the `--disable-confd` parameter. If provided, this disables the `include_recursive "conf.d"` directive and adds the `api-users.conf` file inclusion to `icinga2.conf`. Available since v2.9. Not set by default for compatibility reasons with Puppet, Ansible, Chef, etc.
Example:

```
[root@icinga2-master1.localdomain /]# icinga2 node setup --master
```

In case you want to bind the `ApiListener` object to a specific
host/port you can specify it like this:

```
--listen 192.168.56.101,5665
```

In case you don't need anything in `conf.d`, use the following command line:

```
[root@icinga2-master1.localdomain /]# icinga2 node setup --master --disable-confd
```
#### Node Setup with Satellites/Clients <a id="distributed-monitoring-automation-cli-node-setup-satellite-client"></a>

Make sure that the `/var/lib/icinga2/certs` directory exists and is owned by the `icinga`
user (or the user Icinga 2 is running as):

```
[root@icinga2-client1.localdomain /]# mkdir -p /var/lib/icinga2/certs
[root@icinga2-client1.localdomain /]# chown -R icinga:icinga /var/lib/icinga2/certs
```
First you'll need to generate a new local self-signed certificate.
Pass the following details to the `pki new-cert` CLI command:

Parameter | Description
--------------------|--------------------
Common name (CN) | **Required.** By convention this should be the host's FQDN.
Client certificate files | **Required.** These generated files will be put into the specified location (`--key` and `--cert`). By convention this should be using `/var/lib/icinga2/certs` as directory.
Example:

```
[root@icinga2-client1.localdomain /]# icinga2 pki new-cert --cn icinga2-client1.localdomain \
  --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
  --cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt
```
Request the master certificate from the master host (`icinga2-master1.localdomain`)
and store it as `trusted-parent.crt`. Review it and continue.

Pass the following details to the `pki save-cert` CLI command:

Parameter | Description
--------------------|--------------------
Client certificate files | **Required.** Pass the previously generated files using the `--key` and `--cert` parameters.
Trusted parent certificate | **Required.** Store the parent's certificate file. Manually verify that you're trusting it.
Parent host | **Required.** FQDN or IP address of the parent host.
Example:

```
[root@icinga2-client1.localdomain /]# icinga2 pki save-cert --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
  --cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt \
  --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
  --host icinga2-master1.localdomain
```
Continue with the additional node setup step. Specify a local endpoint and zone name (`icinga2-client1.localdomain`)
and set the master host (`icinga2-master1.localdomain`) as parent zone configuration. Specify the path to
the previously stored trusted parent certificate.
Pass the following details to the `node setup` CLI command:

Parameter | Description
--------------------|--------------------
Common name (CN) | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN.
Request ticket | **Optional.** Add the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing). If left out, you need to sign the CSR manually on the master node (see below).
Trusted parent certificate | **Required.** Add the previously fetched trusted parent certificate (this step means that you've verified its origin).
Parent host | **Optional.** FQDN or IP address of the parent host. This is where the command connects for CSR signing. If not specified, you need to manually copy the parent's public CA certificate file into `/var/lib/icinga2/certs/ca.crt` in order to start Icinga 2.
Parent endpoint | **Required.** Specify the parent's endpoint name.
Client zone name | **Required.** Specify the client's zone name.
Parent zone name | **Optional.** Specify the parent's zone name.
Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)).
Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
Global zones | **Optional.** Specified with the `--global_zones` parameter. Allows specifying additional global zones besides `global-templates` and `director-global`.
Disable conf.d | **Optional.** Specified with the `--disable-confd` parameter. If provided, this disables the `include_recursive "conf.d"` directive in `icinga2.conf`. Available since v2.9. Not set by default for compatibility reasons with Puppet, Ansible, Chef, etc.
> **Note**
>
> The `master_host` parameter is deprecated and will be removed. Please use `--parent_host` instead.
Example:

```
[root@icinga2-client1.localdomain /]# icinga2 node setup --ticket ead2d570e18c78abf285d6b85524970a0f69c22d \
  --cn icinga2-client1.localdomain \
  --endpoint icinga2-master1.localdomain \
  --zone icinga2-client1.localdomain \
  --parent_zone master \
  --parent_host icinga2-master1.localdomain \
  --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
  --accept-commands --accept-config \
  --disable-confd
```
In case the client should actively connect to the master node, you'll
need to modify the `--endpoint` parameter using the format `cn,host,port`:

```
--endpoint icinga2-master1.localdomain,192.168.56.101,5665
```
Specify the parent zone using the `--parent_zone` parameter. This is useful
if the client connects to a satellite, not the master instance:

```
--parent_zone satellite
```
In case the client should know the additional global zone `linux-templates`, you'll
need to set the `--global_zones` parameter:

```
--global_zones linux-templates
```
The `--parent_host` parameter is optional since v2.9 and allows you to perform a connection-less setup.
In that case you cannot restart Icinga 2 immediately: the CLI command asked you to manually copy the parent's public CA
certificate file into `/var/lib/icinga2/certs/ca.crt` first. Once Icinga 2 is started, it sends
a ticket signing request to the parent node. If you have provided a ticket, the master node
signs the request and sends it back to the client, which performs a certificate update in-memory.

In case you did not provide a ticket, you need to manually sign the CSR on the master node
which holds the CA's key pair.
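The pending request can then be listed and signed on the master using the `ca` CLI commands (`<fingerprint>` is a placeholder for the value printed by `ca list`):

```
[root@icinga2-master1.localdomain /]# icinga2 ca list
[root@icinga2-master1.localdomain /]# icinga2 ca sign <fingerprint>
```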
**You can find additional best practices below.**

If this client node is configured as [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
you can safely disable the `checker` feature. The `node setup` CLI command already disabled the `notification` feature.

```
[root@icinga2-client1.localdomain /]# icinga2 feature disable checker
```
3074 Disable "conf.d" inclusion if this is a [top down](06-distributed-monitoring.md#distributed-monitoring-top-down)
3078 [root@icinga2-client1.localdomain /]# sed -i 's/include_recursive "conf.d"/\/\/include_recursive "conf.d"/g' /etc/icinga2/icinga2.conf
3081 **Note**: This is the default since v2.9.
**Optional**: Add an ApiUser object configuration for remote troubleshooting:

```
[root@icinga2-client1.localdomain /]# cat <<EOF >/etc/icinga2/conf.d/api-users.conf
object ApiUser "root" {
  password = "clientsupersecretpassword"
  permissions = [ "*" ]
}
EOF
```
In case you've previously disabled the "conf.d" directory, only
add the file `conf.d/api-users.conf`:

```
[root@icinga2-client1.localdomain /]# echo 'include "conf.d/api-users.conf"' >> /etc/icinga2/icinga2.conf
```
Finally restart Icinga 2:

```
[root@icinga2-client1.localdomain /]# systemctl restart icinga2
```
Your automation tool must then configure the master node in the meantime:

```
# cat <<EOF >>/etc/icinga2/zones.conf
object Endpoint "icinga2-client1.localdomain" {
  //client connects itself
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]
  parent = "master"
}
EOF
```
## Using Multiple Environments <a id="distributed-monitoring-environments"></a>

In some cases it can be desired to run multiple Icinga instances on the same host.
Two potential scenarios include:

* Different versions of the same monitoring configuration (e.g. production and testing)
* Disparate sets of checks for entirely unrelated monitoring environments (e.g. infrastructure and applications)
The configuration is done with the global constants `ApiBindHost` and `ApiBindPort`
or the `bind_host` and `bind_port` attributes of the
[ApiListener](09-object-types.md#objecttype-apilistener) object.

The environment must be set with the global constant `Environment` or as an object attribute
of the [IcingaApplication](09-object-types.md#objecttype-icingaapplication) object.

In either case the constant provides the default value for the attribute, and direct configuration in the objects
takes precedence. The constants were added to allow setting the values from the CLI on startup.
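For instance, a second instance on the same host could set these values in its `constants.conf` (a sketch; the environment name and port are example values):

```
/* constants.conf of the second Icinga instance */
const Environment = "testing"

/* bind the cluster API to a different local address/port than the first instance */
const ApiBindHost = "127.0.0.1"
const ApiBindPort = 6665
```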
When Icinga establishes a TLS connection to another cluster instance it automatically uses the [SNI extension](https://en.wikipedia.org/wiki/Server_Name_Indication)
to signal which endpoint it is attempting to connect to. On its own, this can already be used to place multiple
Icinga instances behind a load balancer.

SNI example: `icinga2-client1.localdomain`

However, if the environment is set to `production`, Icinga appends the environment name to the SNI host name like this:

SNI example with environment: `icinga2-client1.localdomain:production`
Middleware like load balancers or TLS proxies can read the SNI header and route the connection to the appropriate target.
That is, it uses a single externally visible TCP port (usually 5665) and forwards connections to one or more Icinga
instances which are bound to a local TCP port. It does so by inspecting the environment name that is sent as part of the