1 # Distributed Monitoring with Master, Satellites, and Clients <a id="distributed-monitoring"></a>
3 This chapter will guide you through the setup of a distributed monitoring
4 environment, including high-availability clustering and setup details
5 for the Icinga 2 client.
7 ## Roles: Master, Satellites, and Clients <a id="distributed-monitoring-roles"></a>
9 Icinga 2 nodes can be given names for easier understanding:
11 * A `master` node which is on top of the hierarchy.
12 * A `satellite` node which is a child of a `satellite` or `master` node.
13 * A `client` node which works as an `agent` connected to `master` and/or `satellite` nodes.
15 ![Icinga 2 Distributed Roles](images/distributed-monitoring/icinga2_distributed_roles.png)
Rephrasing this picture in more detail:
19 * A `master` node has no parent node.
* A `master` node is where you usually install Icinga Web 2.
21 * A `master` node can combine executed checks from child nodes into backends and notifications.
22 * A `satellite` node has a parent and a child node.
23 * A `satellite` node may execute checks on its own or delegate check execution to child nodes.
24 * A `satellite` node can receive configuration for hosts/services, etc. from the parent node.
25 * A `satellite` node continues to run even if the master node is temporarily unavailable.
26 * A `client` node only has a parent node.
27 * A `client` node will either run its own configured checks or receive command execution events from the parent node.
29 The following sections will refer to these roles and explain the
30 differences and the possibilities this kind of setup offers.
32 **Tip**: If you just want to install a single master node that monitors several hosts
(i.e. Icinga 2 clients), continue reading -- we'll start with simple examples.
35 In case you are planning a huge cluster setup with multiple levels and
36 lots of clients, read on -- we'll deal with these cases later on.
38 The installation on each system is the same: You need to install the
39 [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and the required [plugins](02-getting-started.md#setting-up-check-plugins).
The required configuration steps mostly happen
42 on the command line. You can also [automate the setup](06-distributed-monitoring.md#distributed-monitoring-automation).
The first thing you need to learn about a distributed setup is the hierarchy of its components.
46 ## Zones <a id="distributed-monitoring-zones"></a>
48 The Icinga 2 hierarchy consists of so-called [zone](09-object-types.md#objecttype-zone) objects.
49 Zones depend on a parent-child relationship in order to trust each other.
51 ![Icinga 2 Distributed Zones](images/distributed-monitoring/icinga2_distributed_zones.png)
53 Have a look at this example for the `satellite` zones which have the `master` zone as a parent zone:

    object Zone "master" {
       //...
    }

    object Zone "satellite region 1" {
      parent = "master"
       //...
    }

    object Zone "satellite region 2" {
      parent = "master"
       //...
    }
69 There are certain limitations for child zones, e.g. their members are not allowed
to send configuration commands to the parent zone members. Vice versa, the
trust hierarchy allows, for example, the `master` zone to send
configuration files to the `satellite` zone. Read more about this
73 in the [security section](06-distributed-monitoring.md#distributed-monitoring-security).
75 `client` nodes also have their own unique zone. By convention you
76 can use the FQDN for the zone name.
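
A minimal sketch of such a client zone, assuming a client named `icinga2-client1.localdomain` whose parent is the `master` zone:

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]
      parent = "master"
    }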
78 ## Endpoints <a id="distributed-monitoring-endpoints"></a>
Nodes which are members of a zone are so-called [Endpoint](09-object-types.md#objecttype-endpoint) objects.
82 ![Icinga 2 Distributed Endpoints](images/distributed-monitoring/icinga2_distributed_endpoints.png)
84 Here is an example configuration for two endpoints in different zones:

    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-satellite1.localdomain" {
      host = "192.168.56.105"
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain" ]
      parent = "master"
    }
All endpoints in the same zone work as a high-availability setup. For
104 example, if you have two nodes in the `master` zone, they will load-balance the check execution.
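
For example, a sketch of such an HA `master` zone with two members, assuming a second master named `icinga2-master2.localdomain`:

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }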
106 Endpoint objects are important for specifying the connection
107 information, e.g. if the master should actively try to connect to a client.
109 The zone membership is defined inside the `Zone` object definition using
110 the `endpoints` attribute with an array of `Endpoint` names.
112 If you want to check the availability (e.g. ping checks) of the node
113 you still need a [Host](09-object-types.md#objecttype-host) object.
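
As a minimal sketch, a ping-style check against the satellite endpoint's address could use the `hostalive` check command shipped with the plugins:

    object Host "icinga2-satellite1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.105"
    }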
115 ## ApiListener <a id="distributed-monitoring-apilistener"></a>
117 In case you are using the CLI commands later, you don't have to write
118 this configuration from scratch in a text editor.
119 The [ApiListener](09-object-types.md#objecttype-apilistener)
120 object is used to load the SSL certificates and specify restrictions, e.g.
121 for accepting configuration commands.
123 It is also used for the [Icinga 2 REST API](12-icinga2-api.md#icinga2-api) which shares
124 the same host and port with the Icinga 2 Cluster protocol.
126 The object configuration is stored in the `/etc/icinga2/features-enabled/api.conf`
127 file. Depending on the configuration mode the attributes `accept_commands`
128 and `accept_config` can be configured here.
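
As a sketch, a node which accepts both configuration and commands from its parent zone would have an `api.conf` similar to this:

    object ApiListener "api" {
      accept_commands = true
      accept_config = true
    }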
130 In order to use the `api` feature you need to enable it and restart Icinga 2.

    icinga2 feature enable api
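
Then restart Icinga 2 for the change to take effect, e.g. on systemd-based distributions:

    systemctl restart icinga2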
134 ## Conventions <a id="distributed-monitoring-conventions"></a>
136 By convention all nodes should be configured using their FQDN.
138 Furthermore, you must ensure that the following names
139 are exactly the same in all configuration files:
141 * Host certificate common name (CN).
142 * Endpoint configuration object for the host.
143 * NodeName constant for the local host.
145 Setting this up on the command line will help you to minimize the effort.
146 Just keep in mind that you need to use the FQDN for endpoints and for
147 common names when asked.
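
A sketch of how these three names line up for a hypothetical node `icinga2-client1.localdomain`:

    /* constants.conf -- the NodeName constant matches the certificate CN */
    const NodeName = "icinga2-client1.localdomain"

    /* zones.conf -- the endpoint object name matches the CN as well */
    object Endpoint "icinga2-client1.localdomain" {
    }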
149 ## Security <a id="distributed-monitoring-security"></a>
151 While there are certain mechanisms to ensure a secure communication between all
nodes (firewalls, policies, software hardening, etc.), Icinga 2 also provides
additional security:
155 * SSL certificates are mandatory for communication between nodes. The CLI commands
156 help you create those certificates.
157 * Child zones only receive updates (check results, commands, etc.) for their configured objects.
158 * Child zones are not allowed to push configuration updates to parent zones.
159 * Zones cannot interfere with other zones and influence each other. Each checkable host or service object is assigned to **one zone** only.
160 * All nodes in a zone trust each other.
* [Config sync](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync) and [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) are disabled by default.
163 The underlying protocol uses JSON-RPC event notifications exchanged by nodes.
164 The connection is secured by TLS. The message protocol uses an internal API,
165 and as such message types and names may change internally and are not documented.
Zones build the trust relationship in a distributed environment. If you do not specify
a zone for a client together with its parent zone, the parent zone's members (e.g. the master instance)
won't trust the client.
171 Building this trust is key in your distributed environment. That way the parent node
172 knows that it is able to send messages to the child zone, e.g. configuration objects,
173 configuration in global zones, commands to be executed in this zone/for this endpoint.
174 It also receives check results from the child zone for checkable objects (host/service).
176 Vice versa, the client trusts the master and accepts configuration and commands if enabled
in the api feature. If a client attempted to send configuration to the parent zone, the parent nodes
would deny it. The parent zone is the configuration entity and does not trust clients in this matter.
A client could, for example, attempt to modify a different client or inject a check command
with malicious code.
182 While it may sound complicated for client setups, it removes the problem with different roles
183 and configurations for a master and a client. Both of them work the same way, are configured
184 in the same way (Zone, Endpoint, ApiListener), and you can troubleshoot and debug them in just one go.
186 ## Master Setup <a id="distributed-monitoring-setup-master"></a>
188 This section explains how to install a central single master node using
189 the `node wizard` command. If you prefer to do an automated installation, please
190 refer to the [automated setup](06-distributed-monitoring.md#distributed-monitoring-automation) section.
Install the [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and set up
the required [plugins](02-getting-started.md#setting-up-check-plugins) if you haven't done so already.
196 **Note**: Windows is not supported for a master node setup.
The next step is to run the `node wizard` CLI command. Prior to that,
make sure you collect the required information:
201 Parameter | Description
202 --------------------|--------------------
203 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
API bind host       | **Optional.** Allows you to specify the address the ApiListener is bound to. For advanced usage only.
API bind port       | **Optional.** Allows you to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
207 The setup wizard will ensure that the following steps are taken:
209 * Enable the `api` feature.
210 * Generate a new certificate authority (CA) in `/var/lib/icinga2/ca` if it doesn't exist.
211 * Create a certificate for this node signed by the CA key.
212 * Update the [zones.conf](04-configuring-icinga-2.md#zones-conf) file with the new zone hierarchy.
213 * Update the [ApiListener](06-distributed-monitoring.md#distributed-monitoring-apilistener) and [constants](04-configuring-icinga-2.md#constants-conf) configuration.
215 Here is an example of a master setup for the `icinga2-master1.localdomain` node on CentOS 7:

    [root@icinga2-master1.localdomain /]# icinga2 node wizard

    Welcome to the Icinga 2 Setup Wizard!

    We will guide you through all required configuration details.

    Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]: n

    Starting the Master setup routine...

    Please specify the common name (CN) [icinga2-master1.localdomain]: icinga2-master1.localdomain
    Reconfiguring Icinga...
    Checking for existing certificates for common name 'icinga2-master1.localdomain'...
    Generating master configuration for Icinga 2.

    Please specify the API bind host/port (optional):
    Bind Host []:
    Bind Port []:

    Now restart your Icinga 2 daemon to finish the installation!
242 You can verify that the CA public and private keys are stored in the `/var/lib/icinga2/ca` directory.
243 Keep this path secure and include it in your [backups](02-getting-started.md#install-backup).
In case you lose the CA private key, you have to generate a new CA for signing new client
certificate requests. You then have to re-create new signed certificates for all existing nodes.
249 Once the master setup is complete, you can also use this node as primary [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing)
master. The following section will explain how to use the CLI commands so that
satellite/client nodes can fetch their signed certificates from this master node.
253 ## Signing Certificates on the Master <a id="distributed-monitoring-setup-sign-certificates-master"></a>
255 All certificates must be signed by the same certificate authority (CA). This ensures
256 that all nodes trust each other in a distributed monitoring environment.
258 This CA is generated during the [master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master)
259 and should be the same on all master instances.
261 You can avoid signing and deploying certificates [manually](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-certificates-manual)
262 by using built-in methods for auto-signing certificate signing requests (CSR):
264 * [CSR Auto-Signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) which uses a client ticket generated on the master as trust identifier.
* [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) which allows you to sign pending certificate requests on the master.
267 Both methods are described in detail below.
> **Note**
>
> [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing) is available in Icinga 2 v2.8+.
273 ### CSR Auto-Signing <a id="distributed-monitoring-setup-csr-auto-signing"></a>
275 A client which sends a certificate signing request (CSR) must authenticate itself
276 in a trusted way. The master generates a client ticket which is included in this request.
277 That way the master can verify that the request matches the previously trusted ticket
278 and sign the request.
> **Note**
>
> Icinga 2 v2.8 adds the possibility to forward signing requests on a satellite
> to the master node. This helps with the setup of [three level clusters](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client)
> and more.

Advantages:
288 * Nodes can be installed by different users who have received the client ticket.
289 * No manual interaction necessary on the master node.
290 * Automation tools like Puppet, Ansible, etc. can retrieve the pre-generated ticket in their client catalog
291 and run the node setup directly.
Disadvantages:

* Tickets need to be generated on the master and copied to client setup wizards.
296 * No central signing management.
299 Setup wizards for satellite/client nodes will ask you for this specific client ticket.
301 There are two possible ways to retrieve the ticket:
303 * [CLI command](11-cli-commands.md#cli-command-pki) executed on the master node.
304 * [REST API](12-icinga2-api.md#icinga2-api) request against the master node.
306 Required information:
308 Parameter | Description
309 --------------------|--------------------
310 Common name (CN) | **Required.** The common name for the satellite/client. By convention this should be the FQDN.
312 The following example shows how to generate a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:

    [root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
316 Querying the [Icinga 2 API](12-icinga2-api.md#icinga2-api) on the master requires an [ApiUser](12-icinga2-api.md#icinga2-api-authentication)
317 object with at least the `actions/generate-ticket` permission.

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/conf.d/api-users.conf

    object ApiUser "client-pki-ticket" {
      password = "bea11beb7b810ea9ce6ea" //change this
      permissions = [ "actions/generate-ticket" ]
    }

    [root@icinga2-master1.localdomain /]# systemctl restart icinga2
328 Retrieve the ticket on the master node `icinga2-master1.localdomain` with `curl`, for example:

    [root@icinga2-master1.localdomain /]# curl -k -s -u client-pki-ticket:bea11beb7b810ea9ce6ea -H 'Accept: application/json' \
    -X POST 'https://icinga2-master1.localdomain:5665/v1/actions/generate-ticket' -d '{ "cn": "icinga2-client1.localdomain" }'
333 Store that ticket number for the satellite/client setup below.
335 **Note**: Never expose the ticket salt and/or ApiUser credentials to your client nodes.
336 Example: Retrieve the ticket on the Puppet master node and send the compiled catalog
337 to the authorized Puppet agent node which will invoke the
338 [automated setup steps](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup).
340 ### On-Demand CSR Signing <a id="distributed-monitoring-setup-on-demand-csr-signing"></a>
342 Icinga 2 v2.8 adds the possibility to sign certificates from clients without
343 requiring a client ticket for auto-signing.
Instead, the client sends a certificate signing request to the specified parent node.
346 This could either be directly the master, or a satellite which forwards the request
347 to the signing master.
Advantages:

* Central certificate request signing management.
352 * No pre-generated ticket is required for client setups.
Disadvantages:

* Asynchronous step for automated deployments.
357 * Needs client verification on the master.
You can list certificate requests by using the `ca list` CLI command. This also shows
which requests have already been signed.

    [root@icinga2-master1.localdomain /]# icinga2 ca list
    Fingerprint                                                      | Timestamp           | Signed | Subject
    -----------------------------------------------------------------|---------------------|--------|--------
    403da5b228df384f07f980f45ba50202529cded7c8182abf96740660caa09727 | 2017/09/06 17:02:40 | *      | CN = icinga2-client1.localdomain
    71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850 | 2017/09/06 17:20:02 |        | CN = icinga2-client2.localdomain
371 **Tip**: Add `--json` to the CLI command to retrieve the details in JSON format.
373 If you want to sign a specific request, you need to use the `ca sign` CLI command
374 and pass its fingerprint as argument.

    [root@icinga2-master1.localdomain /]# icinga2 ca sign 71700c28445109416dd7102038962ac3fd421fbb349a6e7303b6033ec1772850
    information/cli: Signed certificate for 'CN = icinga2-client2.localdomain'.
381 ## Client/Satellite Setup <a id="distributed-monitoring-setup-satellite-client"></a>
383 This section describes the setup of a satellite and/or client connected to an
384 existing master node setup. If you haven't done so already, please [run the master setup](06-distributed-monitoring.md#distributed-monitoring-setup-master).
386 Icinga 2 on the master node must be running and accepting connections on port `5665`.
389 ### Client/Satellite Setup on Linux <a id="distributed-monitoring-setup-client-linux"></a>
391 Please ensure that you've run all the steps mentioned in the [client/satellite section](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
Install the [Icinga 2 package](02-getting-started.md#setting-up-icinga2) and set up
the required [plugins](02-getting-started.md#setting-up-check-plugins) if you haven't done so already.
397 The next step is to run the `node wizard` CLI command.
399 In this example we're generating a ticket on the master node `icinga2-master1.localdomain` for the client `icinga2-client1.localdomain`:

    [root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain
    4f75d2ecd253575fe9180938ebff7cbca262f96e
404 Note: You don't need this step if you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
406 Start the wizard on the client `icinga2-client1.localdomain`:

    [root@icinga2-client1.localdomain /]# icinga2 node wizard

    Welcome to the Icinga 2 Setup Wizard!

    We will guide you through all required configuration details.
416 Press `Enter` or add `y` to start a satellite or client setup.

    Please specify if this is a satellite/client setup ('n' installs a master setup) [Y/n]:
422 Press `Enter` to use the proposed name in brackets, or add a specific common name (CN). By convention
423 this should be the FQDN.

    Starting the Client/Satellite setup routine...

    Please specify the common name (CN) [icinga2-client1.localdomain]: icinga2-client1.localdomain
431 Specify the direct parent for this node. This could be your primary master `icinga2-master1.localdomain`
432 or a satellite node in a multi level cluster scenario.

    Please specify the parent endpoint(s) (master or satellite) where this node should connect to:
    Master/Satellite Common Name (CN from your master/satellite node): icinga2-master1.localdomain
439 Press `Enter` or choose `y` to establish a connection to the parent node.

    Do you want to establish a connection to the parent node from this node? [Y/n]:
> **Note**
>
> If this node cannot connect to the parent node, choose `n`. The setup
> wizard will provide instructions for this scenario -- signing questions are disabled then.
450 Add the connection details for `icinga2-master1.localdomain`.

    Please specify the master/satellite connection information:
    Master/Satellite endpoint host (IP address or FQDN): 192.168.56.101
    Master/Satellite endpoint port [5665]: 5665
458 You can add more parent nodes if necessary. Press `Enter` or choose `n`
459 if you don't want to add any. This comes in handy if you have more than one
460 parent node, e.g. two masters or two satellites.

    Add more master/satellite endpoints? [y/N]:
466 Verify the parent node's certificate:

    Parent certificate information:

     Subject:     CN = icinga2-master1.localdomain
     Issuer:      CN = Icinga CA
     Valid From:  Sep 7 13:41:24 2017 GMT
     Valid Until: Sep 3 13:41:24 2032 GMT
     Fingerprint: AC 99 8B 2B 3D B0 01 00 E5 21 FA 05 2E EC D5 A9 EF 9E AA E3

    Is this information correct? [y/N]: y
The setup wizard fetches the parent node's certificate and asks
481 you to verify this information. This is to prevent MITM attacks or
482 any kind of untrusted parent relationship.
Note: The certificate is not fetched if you have chosen not to connect to the parent node.
487 Proceed with adding the optional client ticket for [CSR auto-signing](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing):

    Please specify the request ticket generated on your Icinga 2 master (optional).
     (Hint: # icinga2 pki ticket --cn 'icinga2-client1.localdomain'):
    4f75d2ecd253575fe9180938ebff7cbca262f96e
495 In case you've chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing)
496 you can leave the ticket question blank.
498 Instead, Icinga 2 tells you to approve the request later on the master node.

    No ticket was specified. Please approve the certificate signing request manually
    on the master (see 'icinga2 ca list' and 'icinga2 ca sign --help' for details).
505 You can optionally specify a different bind host and/or port.

    Please specify the API bind host/port (optional):
    Bind Host []:
    Bind Port []:
513 The next step asks you to accept configuration (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync))
514 and commands (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).

    Accept config from parent node? [y/N]: y
    Accept commands from parent node? [y/N]: y
521 The wizard proceeds and you are good to go.

    Reconfiguring Icinga...

    Done.

    Now restart your Icinga 2 daemon to finish the installation!
> **Note**
>
> If you have chosen not to connect to the parent node, you cannot start
> Icinga 2 yet. The wizard asked you to manually copy the master's public
> CA certificate file into `/var/lib/icinga2/certs/ca.crt`.
>
> You need to manually sign the CSR on the master node.
539 Restart Icinga 2 as requested.

    [root@icinga2-client1.localdomain /]# systemctl restart icinga2
545 Here is an overview of all parameters in detail:
547 Parameter | Description
548 --------------------|--------------------
549 Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
550 Master common name | **Required.** Use the common name you've specified for your master node before.
551 Establish connection to the parent node | **Optional.** Whether the node should attempt to connect to the parent node or not. Defaults to `y`.
Master/Satellite endpoint host | **Required if the client needs to connect to the master/satellite.** The parent endpoint's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
Master/Satellite endpoint port | **Optional if the client needs to connect to the master/satellite.** The parent endpoint's listening port. This information is included in the `Endpoint` object configuration.
554 Add more master/satellite endpoints | **Optional.** If you have multiple master/satellite nodes configured, add them here.
555 Parent Certificate information | **Required.** Verify that the connecting host really is the requested master node.
556 Request ticket | **Optional.** Add the [ticket](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing) generated on the master.
API bind host       | **Optional.** Allows you to specify the address the ApiListener is bound to. For advanced usage only.
API bind port       | **Optional.** Allows you to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
559 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
560 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this defaults to `n`.
562 The setup wizard will ensure that the following steps are taken:
564 * Enable the `api` feature.
565 * Create a certificate signing request (CSR) for the local node.
* Request a signed certificate (optionally with the provided ticket number) on the master node.
* Allow you to verify the parent node's certificate.
568 * Store the signed client certificate and ca.crt in `/var/lib/icinga2/certs`.
569 * Update the `zones.conf` file with the new zone hierarchy.
570 * Update `/etc/icinga2/features-enabled/api.conf` (`accept_config`, `accept_commands`) and `constants.conf`.
573 You can verify that the certificate files are stored in the `/var/lib/icinga2/certs` directory.
> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`. Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.
> **Note**
>
> If the client is not directly connected to the certificate signing master,
> signing requests and responses might need some minutes to fully update the client certificates.
>
> If you have chosen to use [On-Demand CSR Signing](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing),
> certificates need to be signed on the master first. Ticket-less setups require at least Icinga 2 v2.8+ on all involved instances.
588 Now that you've successfully installed a Linux/Unix satellite/client instance, please proceed to
589 the [configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
593 ### Client Setup on Windows <a id="distributed-monitoring-setup-client-windows"></a>
595 Download the MSI-Installer package from [https://packages.icinga.com/windows/](https://packages.icinga.com/windows/).
Requirements:

* Windows Vista/Server 2008 or higher
600 * Versions older than Windows 10/Server 2016 require the [Universal C Runtime for Windows](https://support.microsoft.com/en-us/help/2999226/update-for-universal-c-runtime-in-windows)
601 * [Microsoft .NET Framework 2.0](https://www.microsoft.com/de-de/download/details.aspx?id=1639) for the setup wizard
603 The installer package includes the [NSClient++](https://www.nsclient.org/) package
604 so that Icinga 2 can use its built-in plugins. You can find more details in
605 [this chapter](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
606 The Windows package also installs native [monitoring plugin binaries](06-distributed-monitoring.md#distributed-monitoring-windows-plugins)
607 to get you started more easily.
> **Note**
>
> Please note that Icinga 2 was designed to run as a light-weight client on Windows.
> There is no support for satellite instances.
614 #### Windows Client Setup Start <a id="distributed-monitoring-setup-client-windows-start"></a>
616 Run the MSI-Installer package and follow the instructions shown in the screenshots.
618 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_01.png)
619 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_02.png)
620 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_03.png)
621 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_04.png)
622 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_installer_05.png)
624 The graphical installer offers to run the Icinga 2 setup wizard after the installation. Select
625 the check box to proceed.
> **Note**
>
> You can also run the Icinga 2 setup wizard from the Start menu later.
631 On a fresh installation the setup wizard guides you through the initial configuration.
It also provides a mechanism to send a certificate request to the [CSR signing master](06-distributed-monitoring.md#distributed-monitoring-setup-sign-certificates-master).
634 The following configuration details are required:
636 Parameter | Description
637 --------------------|--------------------
638 Instance name | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
639 Setup ticket | **Optional.** Paste the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing). If left blank, the certificate request must be [signed on the master node](06-distributed-monitoring.md#distributed-monitoring-setup-on-demand-csr-signing).
641 Fill in the required information and click `Add` to add a new master connection.
643 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_01.png)
645 Add the following details:
647 Parameter | Description
648 -------------------------------|-------------------------------
Instance name                  | **Required.** The name of the master/satellite endpoint this client is a direct child of.
650 Master/Satellite endpoint host | **Required.** The master or satellite's IP address or FQDN. This information is included in the `Endpoint` object configuration in the `zones.conf` file.
651 Master/Satellite endpoint port | **Optional.** The master or satellite's listening port. This information is included in the `Endpoint` object configuration.
653 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_02.png)
655 Optionally enable the following settings:
657 Parameter | Description
658 ----------------------------------|----------------------------------
659 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
660 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). For [security reasons](06-distributed-monitoring.md#distributed-monitoring-security) this is disabled by default.
661 Run Icinga 2 service as this user | **Optional.** Specify a different Windows user. This defaults to `NT AUTHORITY\Network Service` and is required for more privileged service checks.
662 Install NSClient++ | **Optional.** The Windows installer bundles the NSClient++ installer for additional [plugin checks](06-distributed-monitoring.md#distributed-monitoring-windows-nscp).
664 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_03.png)
Verify the certificate from the master/satellite instance to which this node should connect.
668 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_04.png)
671 #### Bundled NSClient++ Setup <a id="distributed-monitoring-setup-client-windows-nsclient"></a>
If you have chosen to install/update the NSClient++ package, the Icinga 2 setup wizard asks you to do so.
676 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_01.png)
678 Choose the `Generic` setup.
680 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_02.png)
682 Choose the `Custom` setup type.
684 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_03.png)
686 NSClient++ does not install a sample configuration by default. Change this as shown in the screenshot.
688 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_04.png)
Generate a secure password and enable the web server module. **Note**: The webserver module is
available starting with NSClient++ 0.5.0; the NSClient++ installer bundled with Icinga 2 v2.6+ includes this version.
693 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_05.png)
695 Finish the installation.
697 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_06.png)
699 Open a web browser and navigate to `https://localhost:8443`. Enter the password you've configured
during the setup. In case you lost it, look into the `C:\Program Files\NSClient++\nsclient.ini`
configuration file.
703 ![Icinga 2 Windows Setup NSClient++](images/distributed-monitoring/icinga2_windows_setup_wizard_05_nsclient_07.png)
705 The NSClient++ REST API can be used to query metrics. [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
706 uses this transport method.
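
As a sketch of the Icinga side, a service could use the `nscp_api` CheckCommand from the [Icinga Template Library](10-icinga-template-library.md); the custom attribute names below are assumptions based on the ITL, and the password is the one configured during the NSClient++ setup:

    apply Service "nscp-api-cpu" {
      check_command = "nscp_api"
      //execute the check on the Windows client itself
      command_endpoint = host.vars.client_endpoint

      vars.nscp_api_host = "localhost"
      vars.nscp_api_password = "secret" //replace with your NSClient++ password
      vars.nscp_api_query = "check_cpu"

      assign where host.vars.os == "Windows" && host.vars.client_endpoint
    }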
709 #### Finish Windows Client Setup <a id="distributed-monitoring-setup-client-windows-finish"></a>
711 Finish the Windows setup wizard.
713 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_with_ticket.png)
715 If you did not provide a setup ticket, you need to sign the certificate request on the master.
The setup wizard tells you to do so. The Icinga 2 service is running at this point already
717 and will automatically receive and update a signed client certificate.
> **Note**
>
> Ticket-less setups require at least Icinga 2 v2.8+ on all involved instances.
724 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_06_finish_no_ticket.png)
726 Icinga 2 is automatically started as a Windows service.
728 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_running_service.png)
730 The Icinga 2 configuration is stored inside the `C:\ProgramData\icinga2` directory.
731 Click `Examine Config` in the setup wizard to open a new Explorer window.
733 ![Icinga 2 Windows Setup](images/distributed-monitoring/icinga2_windows_setup_wizard_examine_config.png)
The configuration files can be modified with your favorite editor, e.g. Notepad.

In order to use the [top down](06-distributed-monitoring.md#distributed-monitoring-top-down) client
configuration, follow the steps below.
740 Add a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
741 for syncing check commands later. Navigate to `C:\ProgramData\icinga2\etc\icinga2` and open
742 the `zones.conf` file in your preferred editor. Add the following lines if not existing already:

    object Zone "global-templates" {
      global = true
    }
748 Note: Packages >= 2.8 provide this configuration by default.
750 You don't need any local configuration on the client except for
751 CheckCommand definitions which can be synced using the global zone
752 above. Therefore disable the inclusion of the `conf.d` directory
753 in the `icinga2.conf` file.
754 Navigate to `C:\ProgramData\icinga2\etc\icinga2` and open
the `icinga2.conf` file in your preferred editor. Remove or comment out (`//`) the following line:

    // Commented out, not required on a client with top down mode
    //include_recursive "conf.d"
To validate the configuration on Windows, open an administrator terminal
and run the following command:

    C:\WINDOWS\system32>cd "C:\Program Files\ICINGA2\sbin"
    C:\Program Files\ICINGA2\sbin>icinga2.exe daemon -C
767 **Note**: You have to run this command in a shell with `administrator` privileges.
769 Now you need to restart the Icinga 2 service. Run `services.msc` from the start menu
770 and restart the `icinga2` service. Alternatively, you can use the `net {start,stop}` CLI commands.
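
For example, in an administrator terminal (the service name `icinga2` is registered by the installer):

    C:\WINDOWS\system32>net stop icinga2
    C:\WINDOWS\system32>net start icinga2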
772 ![Icinga 2 Windows Service Start/Stop](images/distributed-monitoring/icinga2_windows_cmd_admin_net_start_stop.png)
774 Now that you've successfully installed a Windows client, please proceed to
775 the [detailed configuration modes](06-distributed-monitoring.md#distributed-monitoring-configuration-modes).
> **Note**
>
> The certificate location changed in v2.8 to `%ProgramData%\var\lib\icinga2\certs`.
> Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.
783 ## Configuration Modes <a id="distributed-monitoring-configuration-modes"></a>
785 There are different ways to ensure that the Icinga 2 cluster nodes execute
786 checks, send notifications, etc.
788 The preferred method is to configure monitoring objects on the master
789 and distribute the configuration to satellites and clients.
791 The following chapters will explain this in detail with hands-on manual configuration
792 examples. You should test and implement this once to fully understand how it works.
794 Once you are familiar with Icinga 2 and distributed monitoring, you
can start with additional integrations to manage and deploy your configuration:
* [Icinga Director](https://github.com/icinga/icingaweb2-module-director) provides a web interface to manage configuration and also allows you to sync imported resources (CMDB, PuppetDB, etc.)
799 * [Ansible Roles](https://github.com/Icinga/icinga2-ansible)
800 * [Puppet Module](https://github.com/Icinga/puppet-icinga2)
801 * [Chef Cookbook](https://github.com/Icinga/chef-icinga2)
803 More details can be found [here](13-addons.md#configuration-tools).
805 ### Top Down <a id="distributed-monitoring-top-down"></a>
807 There are two different behaviors with check execution:
809 * Send a command execution event remotely: The scheduler still runs on the parent node.
810 * Sync the host/service objects directly to the child node: Checks are executed locally.
812 Again, technically it does not matter whether this is a `client` or a `satellite`
813 which is receiving configuration or command execution events.
815 ### Top Down Command Endpoint <a id="distributed-monitoring-top-down-command-endpoint"></a>
817 This mode will force the Icinga 2 node to execute commands remotely on a specified endpoint.
The host/service object configuration is located on the master/satellite and the client only
needs the CheckCommand object definitions used there.
821 ![Icinga 2 Distributed Top Down Command Endpoint](images/distributed-monitoring/icinga2_distributed_top_down_command_endpoint.png)
Advantages:

* No local checks need to be defined on the child node (client).
826 * Light-weight remote check execution (asynchronous events).
827 * No [replay log](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-command-endpoint-log-duration) is necessary for the child node.
828 * Pin checks to specific endpoints (if the child zone consists of 2 endpoints).
Disadvantages:

* If the child node is not connected, no more checks are executed.
* Requires an additional configuration attribute specified in host/service objects.
834 * Requires local `CheckCommand` object configuration. Best practice is to use a [global config zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync).
836 To make sure that all nodes involved will accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy on all nodes.
840 * `icinga2-master1.localdomain` is the configuration master in this scenario.
841 * `icinga2-client1.localdomain` acts as client which receives command execution messages via command endpoint from the master. In addition, it receives the global check command configuration from the master.
843 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
845 The endpoint configuration could look like this, for example:

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111"
    }
Next, you need to define two zones. There is no naming convention; best practice is to either use `master`, `satellite`/`client-fqdn` or to choose region names, for example `Europe`, `USA` and `Asia`.
859 **Note**: Each client requires its own zone and endpoint configuration. Best practice
860 is to use the client's FQDN for all object names.
862 The `master` zone is a parent of the `icinga2-client1.localdomain` zone:

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master" //establish zone hierarchy
    }
876 In addition, add a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
877 for syncing check commands later:

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Zone "global-templates" {
      global = true
    }
885 Note: Packages >= 2.8 provide this configuration by default.
887 You don't need any local configuration on the client except for
888 CheckCommand definitions which can be synced using the global zone
889 above. Therefore disable the inclusion of the `conf.d` directory
890 in `/etc/icinga2/icinga2.conf`.

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/icinga2.conf

    // Commented out, not required on a client as command endpoint
    //include_recursive "conf.d"
897 Edit the `api` feature on the client `icinga2-client1.localdomain` in
898 the `/etc/icinga2/features-enabled/api.conf` file and make sure to set
899 `accept_commands` and `accept_config` to `true`:

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/features-enabled/api.conf

    object ApiListener "api" {
       //...
       accept_commands = true
       accept_config = true
    }
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes. Example on CentOS 7:

    [root@icinga2-client1.localdomain /]# icinga2 daemon -C
    [root@icinga2-client1.localdomain /]# systemctl restart icinga2

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2
920 Once the clients have successfully connected, you are ready for the next step: **execute
921 a remote check on the client using the command endpoint**.
923 Include the host and service object configuration in the `master` zone
924 -- this will help adding a secondary master for high-availability later.

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
928 Add the host and service objects you want to monitor. There is
no limitation for files and directories -- best practice is to
sort things by type.
932 By convention a master/satellite/client host object should use the same name as the endpoint object.
933 You can also add multiple hosts which execute checks against remote services/clients.

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive" //check is executed on the master
      address = "192.168.56.111"

      vars.client_endpoint = name //follows the convention that host name == endpoint name
    }
Given that you are monitoring a Linux client, we'll add a remote [disk](10-icinga-template-library.md#plugin-check-command-disk)
check.

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

    apply Service "disk" {
      check_command = "disk"

      //specify where the check is executed
      command_endpoint = host.vars.client_endpoint

      assign where host.vars.client_endpoint
    }
959 If you have your own custom `CheckCommand` definition, add it to the global zone:

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf

    object CheckCommand "my-cmd" {
      //...
    }
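
As a fleshed-out sketch, assuming a hypothetical plugin binary `check_my_cmd` inside `PluginDir` with warning/critical thresholds:

    object CheckCommand "my-cmd" {
      command = [ PluginDir + "/check_my_cmd" ] //hypothetical plugin binary

      arguments = {
        "-w" = "$my_cmd_warning$"  //maps the custom attribute to the -w parameter
        "-c" = "$my_cmd_critical$" //maps the custom attribute to the -c parameter
      }
    }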
968 Save the changes and validate the configuration on the master node:

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
972 Restart the Icinga 2 daemon (example for CentOS 7):

    [root@icinga2-master1.localdomain /]# systemctl restart icinga2
976 The following steps will happen:
978 * Icinga 2 validates the configuration on `icinga2-master1.localdomain` and restarts.
979 * The `icinga2-master1.localdomain` node schedules and executes the checks.
980 * The `icinga2-client1.localdomain` node receives the execute command event with additional command parameters.
981 * The `icinga2-client1.localdomain` node maps the command parameters to the local check command, executes the check locally, and sends back the check result message.
983 As you can see, no interaction from your side is required on the client itself, and it's not necessary to reload the Icinga 2 service on the client.
985 You have learned the basics about command endpoint checks. Proceed with
986 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
987 section where you can find detailed information on extending the setup.
990 ### Top Down Config Sync <a id="distributed-monitoring-top-down-config-sync"></a>
992 This mode syncs the object configuration files within specified zones.
993 It comes in handy if you want to configure everything on the master node
994 and sync the satellite checks (disk, memory, etc.). The satellites run their
995 own local scheduler and will send the check result messages back to the master.
997 ![Icinga 2 Distributed Top Down Config Sync](images/distributed-monitoring/icinga2_distributed_top_down_config_sync.png)
Advantages:

* Sync the configuration files from the parent zone to the child zones.
1002 * No manual restart is required on the child nodes, as syncing, validation, and restarts happen automatically.
1003 * Execute checks directly on the child node's scheduler.
1004 * Replay log if the connection drops (important for keeping the check history in sync, e.g. for SLA reports).
1005 * Use a global zone for syncing templates, groups, etc.
Disadvantages:

* Requires a config directory on the master node with the zone name underneath `/etc/icinga2/zones.d`.
1010 * Additional zone and endpoint configuration needed.
1011 * Replay log is replicated on reconnect after connection loss. This might increase the data transfer and create an overload on the connection.
1013 To make sure that all involved nodes accept configuration and/or
commands, you need to configure the `Zone` and `Endpoint` hierarchy on all nodes.
1017 * `icinga2-master1.localdomain` is the configuration master in this scenario.
1018 * `icinga2-client2.localdomain` acts as client which receives configuration from the master. Checks are scheduled locally.
1020 Include the endpoint and zone configuration on **both** nodes in the file `/etc/icinga2/zones.conf`.
1022 The endpoint configuration could look like this:

    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      host = "192.168.56.101"
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112"
    }
Next, you need to define two zones. There is no naming convention; best practice is to either use `master`, `satellite`/`client-fqdn` or to choose region names, for example `Europe`, `USA` and `Asia`.
1036 **Note**: Each client requires its own zone and endpoint configuration. Best practice
1037 is to use the client's FQDN for all object names.
1039 The `master` zone is a parent of the `icinga2-client2.localdomain` zone:

    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master" //establish zone hierarchy
    }
1053 Edit the `api` feature on the client `icinga2-client2.localdomain` in
1054 the `/etc/icinga2/features-enabled/api.conf` file and set
1055 `accept_config` to `true`.

    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/features-enabled/api.conf

    object ApiListener "api" {
       //...
       accept_config = true
    }
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.
1067 Example on CentOS 7:

    [root@icinga2-client2.localdomain /]# icinga2 daemon -C
    [root@icinga2-client2.localdomain /]# systemctl restart icinga2

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1076 **Tip**: Best practice is to use a [global zone](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
1077 for common configuration items (check commands, templates, groups, etc.).
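
For example, a sketch of a service template synced to all nodes via the `global-templates` zone (the template name and thresholds are illustrative):

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/templates.conf

    template Service "generic-service" {
      max_check_attempts = 5
      check_interval = 1m
      retry_interval = 30s
    }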
1079 Once the clients have connected successfully, it's time for the next step: **execute
1080 a local check on the client using the configuration sync**.
1082 Navigate to `/etc/icinga2/zones.d` on your master node
1083 `icinga2-master1.localdomain` and create a new directory with the same
1084 name as your satellite/client zone name:

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/icinga2-client2.localdomain
1088 Add the host and service objects you want to monitor. There is
1089 no limitation for files and directories -- best practice is to
1090 sort things by type.
1092 By convention a master/satellite/client host object should use the same name as the endpoint object.
1093 You can also add multiple hosts which execute checks against remote services/clients.

    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/icinga2-client2.localdomain
    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim hosts.conf

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      zone = "master" //optional trick: sync the required host object to the client, but enforce the "master" zone to execute the check
    }
Given that you are monitoring a Linux client, we'll just add a local [disk](10-icinga-template-library.md#plugin-check-command-disk)
check.

    [root@icinga2-master1.localdomain /etc/icinga2/zones.d/icinga2-client2.localdomain]# vim services.conf

    object Service "disk" {
      host_name = "icinga2-client2.localdomain"

      check_command = "disk"
    }
1115 Save the changes and validate the configuration on the master node:

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1119 Restart the Icinga 2 daemon (example for CentOS 7):

    [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1123 The following steps will happen:
1125 * Icinga 2 validates the configuration on `icinga2-master1.localdomain`.
1126 * Icinga 2 copies the configuration into its zone config store in `/var/lib/icinga2/api/zones`.
1127 * The `icinga2-master1.localdomain` node sends a config update event to all endpoints in the same or direct child zones.
1128 * The `icinga2-client2.localdomain` node accepts config and populates the local zone config store with the received config files.
1129 * The `icinga2-client2.localdomain` node validates the configuration and automatically restarts.
Again, there is no interaction required on the client itself.
1134 You can also use the config sync inside a high-availability zone to
1135 ensure that all config objects are synced among zone members.
**Note**: You can only have one so-called "config master" in a zone which stores
the configuration in the `zones.d` directory.
Multiple nodes with configuration files in the `zones.d` directory are not supported.
1142 Now that you've learned the basics about the configuration sync, proceed with
1143 the [scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios)
1144 section where you can find detailed information on extending the setup.
1148 If you are eager to start fresh instead you might take a look into the
1149 [Icinga Director](https://github.com/icinga/icingaweb2-module-director).
1151 ## Scenarios <a id="distributed-monitoring-scenarios"></a>
1153 The following examples should give you an idea on how to build your own
1154 distributed monitoring environment. We've seen them all in production
1155 environments and received feedback from our [community](https://www.icinga.com/community/get-involved/)
1156 and [partner support](https://www.icinga.com/services/support/) channels:
1158 * Single master with clients.
1159 * HA master with clients as command endpoint.
1160 * Three level cluster with config HA masters, satellites receiving config sync, and clients checked using command endpoint.
1162 ### Master with Clients <a id="distributed-monitoring-master-clients"></a>
1164 ![Icinga 2 Distributed Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_clients.png)
1166 * `icinga2-master1.localdomain` is the primary master node.
1167 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
Setup requirements:

* Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1172 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [client](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
1174 Edit the `zones.conf` configuration file on the master:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
    }

    object Endpoint "icinga2-client1.localdomain" {
      host = "192.168.56.111" //the master actively tries to connect to the client
    }

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112" //the master actively tries to connect to the client
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master"
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1210 The two client nodes do not necessarily need to know about each other. The only important thing
1211 is that they know about the parent zone and their endpoint members (and optionally the global zone).
1213 If you specify the `host` attribute in the `icinga2-master1.localdomain` endpoint object,
1214 the client will actively try to connect to the master node. Since we've specified the client
1215 endpoint's attribute on the master node already, we don't want the clients to connect to the
1216 master. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**

    [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client1.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }

    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

    object Endpoint "icinga2-master1.localdomain" {
      //do not actively connect to the master by leaving out the 'host' attribute
    }

    object Endpoint "icinga2-client2.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]

      parent = "master"
    }

    /* sync global commands */
    object Zone "global-templates" {
      global = true
    }
1266 Now it is time to define the two client hosts and apply service checks using
1267 the command endpoint execution method on them. Note: You can also use the
1268 config sync mode here.
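For reference, one possible layout for the config sync mode could look like the following sketch. It assumes the clients were set up with `accept_config = true` and uses a hypothetical file name `local.conf`; the objects are placed into the client's own zone directory so the client schedules and executes the checks itself:

[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/icinga2-client1.localdomain
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/icinga2-client1.localdomain/local.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
}

object Service "disk" {
  host_name = "icinga2-client1.localdomain"
  check_command = "disk"
  //no command_endpoint: the client receives this configuration and runs the check on its own
}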
1270 Create a new configuration directory on the master node:
1272 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1274 Add the two client nodes as host objects:
1276 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
1277 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
1291 Add services using command endpoint checks:
1293 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
apply Service "ping4" {
  check_command = "ping4"
  //check is executed on the master node
  assign where host.address
}

apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.client_endpoint
}
1310 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
1312 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1313 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1315 Open Icinga Web 2 and check the two newly created client hosts with two new services
1316 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
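Besides Icinga Web 2, you can also verify on the CLI of the master that the objects were generated as expected, for example with the `object list` command:

[root@icinga2-master1.localdomain /]# icinga2 object list --type Service --name *disk*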
1318 ### High-Availability Master with Clients <a id="distributed-monitoring-scenarios-ha-master-clients"></a>
1320 ![Icinga 2 Distributed High Availability Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_ha_master_clients.png)
1322 This scenario is similar to the one in the [previous section](06-distributed-monitoring.md#distributed-monitoring-master-clients). The only difference is that we will now set up two master nodes in a high-availability setup.
These nodes must be configured as zone and endpoint objects.
1325 The setup uses the capabilities of the Icinga 2 cluster. All zone members
1326 replicate cluster events amongst each other. In addition to that, several Icinga 2
1327 features can enable HA functionality.
1329 **Note**: All nodes in the same zone require that you enable the same features for high-availability (HA).
* `icinga2-master1.localdomain` is the config master node.
* `icinga2-master2.localdomain` is the secondary master node without config in `zones.d`.
1335 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
1339 * Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1340 * Set up `icinga2-master2.localdomain` as [client](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
1341 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (when asked for adding multiple masters, set to `y` and add the secondary master `icinga2-master2.localdomain`).
1343 In case you don't want to use the CLI commands, you can also manually create and sync the
1344 required SSL certificates. We will modify and discuss all the details of the automatically generated configuration here.
1346 Since there are now two nodes in the same zone, we must consider the
1347 [high-availability features](06-distributed-monitoring.md#distributed-monitoring-high-availability-features).
1349 * Checks and notifications are balanced between the two master nodes. That's fine, but it requires check plugins and notification scripts to exist on both nodes.
1350 * The IDO feature will only be active on one node by default. Since all events are replicated between both nodes, it is easier to just have one central database.
1352 One possibility is to use a dedicated MySQL cluster VIP (external application cluster)
1353 and leave the IDO feature with enabled HA capabilities. Alternatively,
1354 you can disable the HA feature and write to a local database on each node.
1355 Both methods require that you configure Icinga Web 2 accordingly (monitoring
1356 backend, IDO database, used transports, etc.).
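As a sketch for the second method, `enable_ha` is set to `false` in the IDO feature configuration on both nodes; the connection details below are placeholders for your local databases:

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/features-enabled/ido-mysql.conf

object IdoMysqlConnection "ido-mysql" {
  host = "127.0.0.1"
  database = "icinga"
  user = "icinga"
  password = "icinga"

  enable_ha = false //each node writes to its own local database
}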
1358 The zone hierarchy could look like this. It involves putting the two master nodes
1359 `icinga2-master1.localdomain` and `icinga2-master2.localdomain` into the `master` zone.
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-master2.localdomain" {
  host = "192.168.56.102"
}

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the master actively tries to connect to the client
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112" //the master actively tries to connect to the client
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master"
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
1400 The two client nodes do not necessarily need to know about each other. The only important thing
1401 is that they know about the parent zone and their endpoint members (and optionally about the global zone).
If you specify the `host` attribute in the `icinga2-master1.localdomain` and `icinga2-master2.localdomain`
endpoint objects, the clients will actively try to connect to the master nodes. Since we've already specified
the client endpoints' `host` attribute on the master nodes, we don't want the clients to connect to the
master nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-master2.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-client1.localdomain" {
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-master2.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}

object Endpoint "icinga2-client2.localdomain" {
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "master"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
1464 Now it is time to define the two client hosts and apply service checks using
1465 the command endpoint execution method. Note: You can also use the
1466 config sync mode here.
1468 Create a new configuration directory on the master node `icinga2-master1.localdomain`.
1469 **Note**: The secondary master node `icinga2-master2.localdomain` receives the
1470 configuration using the [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync).
1472 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1474 Add the two client nodes as host objects:
1476 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
1477 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
1491 Add services using command endpoint checks:
1493 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
apply Service "ping4" {
  check_command = "ping4"
  //check is executed on the master node
  assign where host.address
}

apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.client_endpoint
}
1510 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
1512 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1513 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1515 Open Icinga Web 2 and check the two newly created client hosts with two new services
1516 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
1518 **Tip**: It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
1519 to make sure that your cluster notifies you in case of failure.
1522 ### Three Levels with Master, Satellites, and Clients <a id="distributed-monitoring-scenarios-master-satellite-client"></a>
1524 ![Icinga 2 Distributed Master and Satellites with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_satellite_client.png)
1526 This scenario combines everything you've learned so far: High-availability masters,
1527 satellites receiving their configuration from the master zone, and clients checked via command
1528 endpoint from the satellite zones.
1530 **Tip**: It can get complicated, so grab a pen and paper and bring your thoughts to life.
1531 Play around with a test setup before using it in a production environment!
* `icinga2-master1.localdomain` is the configuration master node.
* `icinga2-master2.localdomain` is the secondary master node without configuration in `zones.d`.
1537 * `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` are satellite nodes in a `master` child zone.
1538 * `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients.
1542 * Set up `icinga2-master1.localdomain` as [master](06-distributed-monitoring.md#distributed-monitoring-setup-master).
1543 * Set up `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration).
1544 * Set up `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [clients](06-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
When asked for the master endpoint providing CSR auto-signing capabilities,
please add the master node which holds the CA and has the `ApiListener` feature configured and enabled.
The parent endpoint must still remain the satellite endpoint name.
1550 Example for `icinga2-client1.localdomain`:
1552 Please specify the master endpoint(s) this node should connect to:
1554 Master is the first satellite `icinga2-satellite1.localdomain`:
1556 Master Common Name (CN from your master setup): icinga2-satellite1.localdomain
1557 Do you want to establish a connection to the master from this node? [Y/n]: y
1558 Please fill out the master connection information:
1559 Master endpoint host (Your master's IP address or FQDN): 192.168.56.105
1560 Master endpoint port [5665]:
1562 Add the second satellite `icinga2-satellite2.localdomain` as master:
1564 Add more master endpoints? [y/N]: y
1565 Master Common Name (CN from your master setup): icinga2-satellite2.localdomain
1566 Do you want to establish a connection to the master from this node? [Y/n]: y
1567 Please fill out the master connection information:
1568 Master endpoint host (Your master's IP address or FQDN): 192.168.56.106
1569 Master endpoint port [5665]:
1570 Add more master endpoints? [y/N]: n
Specify the master node `icinga2-master1.localdomain` with the CA private key and ticket salt:
1574 Please specify the master connection for CSR auto-signing (defaults to master endpoint host):
1575 Host [192.168.56.106]: icinga2-master1.localdomain
1578 In case you cannot connect to the master node from your clients, you'll manually need
1579 to [generate the SSL certificates](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-certificates-manual)
1580 and modify the configuration accordingly.
1582 We'll discuss the details of the required configuration below.
1584 The zone hierarchy can look like this. We'll define only the directly connected zones here.
1586 You can safely deploy this configuration onto all master and satellite zone
1587 members. You should keep in mind to control the endpoint [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction)
1588 using the `host` attribute.
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-master2.localdomain" {
  host = "192.168.56.102"
}

object Endpoint "icinga2-satellite1.localdomain" {
  host = "192.168.56.105"
}

object Endpoint "icinga2-satellite2.localdomain" {
  host = "192.168.56.106"
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]

  parent = "master"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
1623 Repeat the configuration step for `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain`
1624 and `icinga2-satellite2.localdomain`.
1626 Since we want to use [top down command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint) checks,
1627 we must configure the client endpoint and zone objects.
1628 In order to minimize the effort, we'll sync the client zone and endpoint configuration to the
1629 satellites where the connection information is needed as well.
1631 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/{master,satellite,global-templates}
1632 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the satellite actively tries to connect to the client
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "satellite"
}

[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112" //the satellite actively tries to connect to the client
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "satellite"
}
1658 The two client nodes do not necessarily need to know about each other, either. The only important thing
1659 is that they know about the parent zone (the satellite) and their endpoint members (and optionally the global zone).
If you specify the `host` attribute in the `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain`
endpoint objects, the client nodes will actively try to connect to the satellite nodes. Since we've already
specified the client endpoints' `host` attribute on the satellite nodes, we don't want the client nodes to connect
to the satellite nodes. **Choose one [connection direction](06-distributed-monitoring.md#distributed-monitoring-advanced-hints-connection-direction).**
1666 Example for `icinga2-client1.localdomain`:
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-satellite1.localdomain" {
  //do not actively connect to the satellite by leaving out the 'host' attribute
}

object Endpoint "icinga2-satellite2.localdomain" {
  //do not actively connect to the satellite by leaving out the 'host' attribute
}

object Endpoint "icinga2-client1.localdomain" {
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "satellite"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
1697 Example for `icinga2-client2.localdomain`:
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-satellite1.localdomain" {
  //do not actively connect to the satellite by leaving out the 'host' attribute
}

object Endpoint "icinga2-satellite2.localdomain" {
  //do not actively connect to the satellite by leaving out the 'host' attribute
}

object Endpoint "icinga2-client2.localdomain" {
}

object Zone "satellite" {
  endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
}

object Zone "icinga2-client2.localdomain" {
  endpoints = [ "icinga2-client2.localdomain" ]

  parent = "satellite"
}

/* sync global commands */
object Zone "global-templates" {
  global = true
}
1728 Now it is time to define the two client hosts on the master, sync them to the satellites
1729 and apply service checks using the command endpoint execution method to them.
1730 Add the two client nodes as host objects to the `satellite` zone.
1732 We've already created the directories in `/etc/icinga2/zones.d` including the files for the
1733 zone and endpoint configuration for the clients.
1735 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/satellite
1737 Add the host object configuration for the `icinga2-client1.localdomain` client. You should
1738 have created the configuration file in the previous steps and it should contain the endpoint
1739 and zone object configuration already.
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client1.localdomain.conf

object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
1749 Add the host object configuration for the `icinga2-client2.localdomain` client configuration file:
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim icinga2-client2.localdomain.conf

object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
}
1759 Add a service object which is executed on the satellite nodes (e.g. `ping4`). Pin the apply rule to the `satellite` zone only.
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf

apply Service "ping4" {
  check_command = "ping4"
  //check is executed on the satellite node
  assign where host.zone == "satellite" && host.address
}
1769 Add services using command endpoint checks. Pin the apply rules to the `satellite` zone only.
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/satellite]# vim services.conf

apply Service "disk" {
  check_command = "disk"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.zone == "satellite" && host.vars.client_endpoint
}
1782 Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
1784 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
1785 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
1787 Open Icinga Web 2 and check the two newly created client hosts with two new services
1788 -- one executed locally (`ping4`) and one using command endpoint (`disk`).
1790 **Tip**: It's a good idea to add [health checks](06-distributed-monitoring.md#distributed-monitoring-health-checks)
1791 to make sure that your cluster notifies you in case of failure.
1793 ## Best Practice <a id="distributed-monitoring-best-practice"></a>
1795 We've put together a collection of configuration examples from community feedback.
1796 If you like to share your tips and tricks with us, please join the [community channels](https://www.icinga.com/community/get-involved/)!
1798 ### Global Zone for Config Sync <a id="distributed-monitoring-global-zone-config-sync"></a>
1800 Global zones can be used to sync generic configuration objects
1801 to all nodes depending on them. Common examples are:
1803 * Templates which are imported into zone specific objects.
1804 * Command objects referenced by Host, Service, Notification objects.
1805 * Apply rules for services, notifications, dependencies and scheduled downtimes.
1806 * User objects referenced in notifications.
1808 * TimePeriod objects.
Plugin scripts and binaries cannot be synced; this is for Icinga 2
configuration files only. Use your preferred package repository
and/or configuration management tool (Puppet, Ansible, Chef, etc.)
to distribute the plugins to all nodes, for example as sketched below.
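A package-based approach could simply install the plugins on every node which executes checks. The package name below is distribution-specific and serves as an assumption; adjust it for your platform:

[root@icinga2-client1.localdomain /]# yum install -y nagios-plugins-all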
1815 **Note**: Checkable objects (hosts and services) cannot be put into a global
1816 zone. The configuration validation will terminate with an error.
1818 The zone object configuration must be deployed on all nodes which should receive
1819 the global configuration files:
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

object Zone "global-templates" {
  global = true
}
1827 Note: Packages >= 2.8 provide this configuration by default.
1829 Similar to the zone configuration sync you'll need to create a new directory in
1830 `/etc/icinga2/zones.d`:
1832 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
1834 Next, add a new check command, for example:
1836 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf
object CheckCommand "my-cmd" {
  //...
}
Restart the client(s) which should receive the global zone
before restarting the parent master/satellite nodes.
1845 Then validate the configuration on the master node and restart Icinga 2.
1847 **Tip**: You can copy the example configuration files located in `/etc/icinga2/conf.d`
1848 into your global zone.
1852 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/conf.d
1853 [root@icinga2-master1.localdomain /etc/icinga2/conf.d]# cp {commands,downtimes,groups,notifications,services,templates,timeperiods,users}.conf /etc/icinga2/zones.d/global-templates
1855 ### Health Checks <a id="distributed-monitoring-health-checks"></a>
In case of network failures or other problems, your monitoring might
either have late check results or just send out mass alarms for unknown
checks.
1861 In order to minimize the problems caused by this, you should configure
1862 additional health checks.
1864 The `cluster` check, for example, will check if all endpoints in the current zone and the directly
1865 connected zones are working properly:
1867 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1868 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf
object Host "icinga2-master1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.101"
}

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf

object Service "cluster" {
  check_command = "cluster"
  check_interval = 5s
  retry_interval = 1s

  host_name = "icinga2-master1.localdomain"
}
1885 The `cluster-zone` check will test whether the configured target zone is currently
1886 connected or not. This example adds a health check for the [ha master with clients scenario](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients).
1888 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf
apply Service "cluster-health" {
  check_command = "cluster-zone"

  display_name = "cluster-health-" + host.name

  /* This follows the convention that the client zone name is the FQDN which is the same as the host object name. */
  vars.cluster_zone = host.name

  assign where host.vars.client_endpoint
}
1901 In case you cannot assign the `cluster_zone` attribute, add specific
1902 checks to your cluster:
1904 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/cluster.conf
object Service "cluster-zone-satellite" {
  check_command = "cluster-zone"
  check_interval = 5s
  retry_interval = 1s
  vars.cluster_zone = "satellite"

  host_name = "icinga2-master1.localdomain"
}
1916 If you are using top down checks with command endpoint configuration, you can
1917 add a dependency which prevents notifications for all other failing services:
1919 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/dependencies.conf
apply Dependency "health-check" to Service {
  parent_service_name = "cluster-health"

  states = [ OK ]
  disable_notifications = true

  assign where host.vars.client_endpoint
  ignore where service.name == "cluster-health"
}
1931 ### Pin Checks in a Zone <a id="distributed-monitoring-pin-checks-zone"></a>
In case you want to pin specific checks to their endpoints in a given zone, you'll need to use
the `command_endpoint` attribute. This is reasonable if you want to
execute a local disk check in the `master` zone on a specific endpoint.
1937 [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
1938 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-master1.localdomain.conf
object Host "icinga2-master1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.101"
}

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/services.conf

apply Service "disk" {
  check_command = "disk"

  command_endpoint = host.name //requires a host object matching the endpoint object name e.g. icinga2-master1.localdomain

  assign where host.zone == "master" && match("icinga2-master*", host.name)
}
The `host.zone` attribute check inside the expression ensures that
the service object is only created for host objects inside the `master`
zone. In addition, the [match](18-library-reference.md#global-functions-match)
function ensures that services are only created for the master nodes.
1960 ### Windows Firewall <a id="distributed-monitoring-windows-firewall"></a>
1962 #### ICMP Requests <a id="distributed-monitoring-windows-firewall-icmp"></a>
1964 By default ICMP requests are disabled in the Windows firewall. You can
1965 change that by [adding a new rule](https://support.microsoft.com/en-us/kb/947709).
1967 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow
1969 #### Icinga 2 <a id="distributed-monitoring-windows-firewall-icinga2"></a>
1971 If your master/satellite nodes should actively connect to the Windows client
1972 you'll also need to ensure that port `5665` is enabled.
1974 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 5665 (Icinga 2)" dir=in action=allow protocol=TCP localport=5665
1976 #### NSClient++ API <a id="distributed-monitoring-windows-firewall-nsclient-api"></a>
1978 If the [check_nscp_api](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api)
1979 plugin is used to query NSClient++ remotely, you need to ensure that its port is enabled.
1981 C:\WINDOWS\system32>netsh advfirewall firewall add rule name="Open port 8443 (NSClient++ API)" dir=in action=allow protocol=TCP localport=8443
1983 ### Windows Client and Plugins <a id="distributed-monitoring-windows-plugins"></a>
1985 The Icinga 2 package on Windows already provides several plugins.
1986 Detailed [documentation](10-icinga-template-library.md#windows-plugins) is available for all check command definitions.
1988 Add the following `include` statement on all your nodes (master, satellite, client):
1990 vim /etc/icinga2/icinga2.conf
1992 include <windows-plugins>
1994 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
1995 scenario we'll now add a local disk check.
1997 First, add the client node as host object:
1999 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2000 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client2.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.112"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
  vars.os_type = "windows"
}
2009 Next, add the disk check using command endpoint checks (details in the
2010 [disk-windows](10-icinga-template-library.md#windows-plugins-disk-windows) documentation):
2012 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
apply Service "disk C:" {
  check_command = "disk-windows"

  vars.disk_win_path = "C:"

  //specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  assign where host.vars.os_type == "windows" && host.vars.client_endpoint
}
2025 Validate the configuration and restart Icinga 2.
2027 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2028 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
2030 Open Icinga Web 2 and check your newly added Windows disk check :)
2032 ![Icinga 2 Client Windows](images/distributed-monitoring/icinga2_distributed_windows_client_disk_icingaweb2.png)
2034 If you want to add your own plugins please check [this chapter](05-service-monitoring.md#service-monitoring-requirements)
2035 for the requirements.
2037 ### Windows Client and NSClient++ <a id="distributed-monitoring-windows-nscp"></a>
2039 There are two methods available for querying NSClient++:
2041 * Query the [HTTP API](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-api) locally or remotely (requires a running NSClient++ service)
2042 * Run a [local CLI check](06-distributed-monitoring.md#distributed-monitoring-windows-nscp-check-local) (does not require NSClient++ as a service)
2044 Both methods have their advantages and disadvantages. One thing to
2045 note: If you rely on performance counter delta calculations such as
2046 CPU utilization, please use the HTTP API instead of the CLI sample call.
#### NSClient++ with check_nscp_api <a id="distributed-monitoring-windows-nscp-check-api"></a>
2050 The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
2051 you to install the NSClient++ package. In addition to the Windows plugins you can
2052 use the [nscp_api command](10-icinga-template-library.md#nscp-check-api) provided by the Icinga Template Library (ITL).
The initial setup for the NSClient++ API and the required arguments
is described in the ITL chapter for the [nscp_api](10-icinga-template-library.md#nscp-check-api) CheckCommand.
2057 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
2058 scenario we'll now add a local nscp check which queries the NSClient++ API to check the free disk space.
Define a host object called `icinga2-client1.localdomain` on the master. Add the `nscp_api_password`
custom attribute and specify the drives to check.
2063 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2064 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
  vars.os_type = "Windows"
  vars.nscp_api_password = "icinga"
  vars.drives = [ "C:", "D:" ]
}
2075 The service checks are generated using an [apply for](03-monitoring-basics.md#using-apply-for)
2076 rule based on `host.vars.drives`:
2078 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
apply Service "nscp-api-" for (drive in host.vars.drives) {
  import "generic-service"

  check_command = "nscp_api"
  command_endpoint = host.vars.client_endpoint

  //display_name = "nscp-drive-" + drive

  vars.nscp_api_host = "localhost"
  vars.nscp_api_query = "check_drivesize"
  vars.nscp_api_password = host.vars.nscp_api_password
  vars.nscp_api_arguments = [ "drive=" + drive ]

  ignore where host.vars.os_type != "Windows"
}
2096 Validate the configuration and restart Icinga 2.
2098 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2099 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
2101 Two new services ("nscp-drive-D:" and "nscp-drive-C:") will be visible in Icinga Web 2.
2103 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-api](images/distributed-monitoring/icinga2_distributed_windows_nscp_api_drivesize_icingaweb2.png)
2105 Note: You can also omit the `command_endpoint` configuration to execute
2106 the command on the master. This also requires a different value for `nscp_api_host`
2107 which defaults to `host.address`.
2109 //command_endpoint = host.vars.client_endpoint
2111 //vars.nscp_api_host = "localhost"
2113 You can verify the check execution by looking at the `Check Source` attribute
2114 in Icinga Web 2 or the REST API.
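A hedged REST API example, assuming an `ApiUser` named `root` with the password `icinga` exists on the master; the service name is taken from the apply for rule above, with `:` URL-encoded as `%3A`:

[root@icinga2-master1.localdomain /]# curl -k -s -u root:icinga 'https://localhost:5665/v1/objects/services/icinga2-client1.localdomain!nscp-api-C%3A?attrs=last_check_result'

The `check_source` attribute inside `last_check_result` tells you which endpoint actually executed the check.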
2116 If you want to monitor specific Windows services, you could use the following example:
2118 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2119 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
  vars.os_type = "Windows"
  vars.nscp_api_password = "icinga"
  vars.services = [ "Windows Update", "wscsvc" ]
}
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf

apply Service "nscp-api-" for (svc in host.vars.services) {
  import "generic-service"

  check_command = "nscp_api"
  command_endpoint = host.vars.client_endpoint

  //display_name = "nscp-service-" + svc

  vars.nscp_api_host = "localhost"
  vars.nscp_api_query = "check_service"
  vars.nscp_api_password = host.vars.nscp_api_password
  vars.nscp_api_arguments = [ "service=" + svc ]

  ignore where host.vars.os_type != "Windows"
}
#### NSClient++ with nscp-local <a id="distributed-monitoring-windows-nscp-check-local"></a>
2150 The [Windows setup](06-distributed-monitoring.md#distributed-monitoring-setup-client-windows) already allows
2151 you to install the NSClient++ package. In addition to the Windows plugins you can
2152 use the [nscp-local commands](10-icinga-template-library.md#nscp-plugin-check-commands)
2153 provided by the Icinga Template Library (ITL).
2155 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++](images/distributed-monitoring/icinga2_distributed_windows_nscp.png)
2157 Add the following `include` statement on all your nodes (master, satellite, client):
vim /etc/icinga2/icinga2.conf

include <nscp>
2163 The CheckCommand definitions will automatically determine the installed path
2164 to the `nscp.exe` binary.
2166 Based on the [master with clients](06-distributed-monitoring.md#distributed-monitoring-master-clients)
2167 scenario we'll now add a local nscp check querying a given performance counter.
2169 First, add the client node as host object:
2171 [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
2172 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client1.localdomain" {
  check_command = "hostalive"
  address = "192.168.56.111"
  vars.client_endpoint = name //follows the convention that host name == endpoint name
  vars.os_type = "windows"
}
2181 Next, add a performance counter check using command endpoint checks (details in the
2182 [nscp-local-counter](10-icinga-template-library.md#nscp-check-local-counter) documentation):
2184 [root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
apply Service "nscp-local-counter-cpu" {
  check_command = "nscp-local-counter"
  command_endpoint = host.vars.client_endpoint

  vars.nscp_counter_name = "\\Processor(_total)\\% Processor Time"
  vars.nscp_counter_perfsyntax = "Total Processor Time"
  vars.nscp_counter_warning = 1
  vars.nscp_counter_critical = 5

  vars.nscp_counter_showall = true

  assign where host.vars.os_type == "windows" && host.vars.client_endpoint
}
2200 Validate the configuration and restart Icinga 2.
2202 [root@icinga2-master1.localdomain /]# icinga2 daemon -C
2203 [root@icinga2-master1.localdomain /]# systemctl restart icinga2
2205 Open Icinga Web 2 and check your newly added Windows NSClient++ check :)
2207 ![Icinga 2 Distributed Monitoring Windows Client with NSClient++ nscp-local](images/distributed-monitoring/icinga2_distributed_windows_nscp_counter_icingaweb2.png)
2209 ## Advanced Hints <a id="distributed-monitoring-advanced-hints"></a>
2211 You can find additional hints in this section if you prefer to go your own route
2212 with automating setups (setup, certificates, configuration).
2214 ### Certificate Auto-Renewal <a id="distributed-monitoring-certificate-auto-renewal"></a>
Icinga 2 v2.8+ adds the possibility that nodes request certificate updates
on their own. If their expiration date is soon enough, they automatically
renew their already signed certificate by sending a signing request to the
parent node.
2221 ### High-Availability for Icinga 2 Features <a id="distributed-monitoring-high-availability-features"></a>
2223 All nodes in the same zone require that you enable the same features for high-availability (HA).
2225 By default, the following features provide advanced HA functionality:
2227 * [Checks](06-distributed-monitoring.md#distributed-monitoring-high-availability-checks) (load balanced, automated failover).
2228 * [Notifications](06-distributed-monitoring.md#distributed-monitoring-high-availability-notifications) (load balanced, automated failover).
2229 * [DB IDO](06-distributed-monitoring.md#distributed-monitoring-high-availability-db-ido) (Run-Once, automated failover).
2231 #### High-Availability with Checks <a id="distributed-monitoring-high-availability-checks"></a>
2233 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2234 have the `checker` feature enabled.
2238 # icinga2 feature enable checker
2240 All nodes in the same zone load-balance the check execution. If one instance shuts down,
2241 the other nodes will automatically take over the remaining checks.
2243 #### High-Availability with Notifications <a id="distributed-monitoring-high-availability-notifications"></a>
2245 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2246 have the `notification` feature enabled.
2250 # icinga2 feature enable notification
Notifications are load-balanced amongst all nodes in a zone. By default this functionality is enabled.
2254 If your nodes should send out notifications independently from any other nodes (this will cause
2255 duplicated notifications if not properly handled!), you can set `enable_ha = false`
2256 in the [NotificationComponent](09-object-types.md#objecttype-notificationcomponent) feature.
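A minimal sketch for such a node, assuming the default feature configuration file is used:

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/features-available/notification.conf

object NotificationComponent "notification" {
  enable_ha = false //this node sends out notifications independently
}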
2258 #### High-Availability with DB IDO <a id="distributed-monitoring-high-availability-db-ido"></a>
2260 All instances within the same zone (e.g. the `master` zone as HA cluster) must
2261 have the DB IDO feature enabled.
2263 Example DB IDO MySQL:
2265 # icinga2 feature enable ido-mysql
2267 By default the DB IDO feature only runs on one node. All other nodes in the same zone disable
2268 the active IDO database connection at runtime. The node with the active DB IDO connection is
2269 not necessarily the zone master.
2271 **Note**: The DB IDO HA feature can be disabled by setting the `enable_ha` attribute to `false`
2272 for the [IdoMysqlConnection](09-object-types.md#objecttype-idomysqlconnection) or
[IdoPgsqlConnection](09-object-types.md#objecttype-idopgsqlconnection) object on **all** nodes in the same zone.
2276 All endpoints will enable the DB IDO feature and connect to the configured
2277 database and dump configuration, status and historical data on their own.
2279 If the instance with the active DB IDO connection dies, the HA functionality will
2280 automatically elect a new DB IDO master.
2282 The DB IDO feature will try to determine which cluster endpoint is currently writing
2283 to the database and bail out if another endpoint is active. You can manually verify that
2284 by running the following query command:
2286 icinga=> SELECT status_update_time, endpoint_name FROM icinga_programstatus;
2287 status_update_time | endpoint_name
2288 ------------------------+---------------
2289 2016-08-15 15:52:26+02 | icinga2-master1.localdomain
This is useful when the cluster connection between endpoints breaks, and prevents
data duplication in split-brain scenarios. The failover timeout can be set via the
`failover_timeout` attribute, but not lower than 60 seconds.
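For instance, the failover timeout can be raised in the IDO feature configuration. This is a sketch; `192.168.56.200` stands in for the dedicated MySQL cluster VIP mentioned earlier, and the credentials are placeholders:

object IdoMysqlConnection "ido-mysql" {
  host = "192.168.56.200" //VIP of the external MySQL cluster
  database = "icinga"
  user = "icinga"
  password = "icinga"

  failover_timeout = 120s //must not be lower than 60s
}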
2296 ### Endpoint Connection Direction <a id="distributed-monitoring-advanced-hints-connection-direction"></a>
2298 Nodes will attempt to connect to another node when its local [Endpoint](09-object-types.md#objecttype-endpoint) object
2299 configuration specifies a valid `host` attribute (FQDN or IP address).
2301 Example for the master node `icinga2-master1.localdomain` actively connecting
2302 to the client node `icinga2-client1.localdomain`:
2304 [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the master actively tries to connect to the client
}
2313 Example for the client node `icinga2-client1.localdomain` not actively
2314 connecting to the master node `icinga2-master1.localdomain`:
2316 [root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
}
2325 It is not necessary that both the master and the client node establish
2326 two connections to each other. Icinga 2 will only use one connection
2327 and close the second connection if established.
**Tip**: Choose either to let master/satellite nodes connect to client nodes
or vice versa.
2333 ### Disable Log Duration for Command Endpoints <a id="distributed-monitoring-advanced-hints-command-endpoint-log-duration"></a>
2335 The replay log is a built-in mechanism to ensure that nodes in a distributed setup
2336 keep the same history (check results, notifications, etc.) when nodes are temporarily
2337 disconnected and then reconnect.
2339 This functionality is not needed when a master/satellite node is sending check
execution events to a client which is purely configured for [command endpoint](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
checks only.
The [Endpoint](09-object-types.md#objecttype-endpoint) object attribute `log_duration` can
be lowered or set to 0 to fully disable any log replay updates when the
client is not connected.
2347 Configuration on the master node `icinga2-master1.localdomain`:
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-client1.localdomain" {
  host = "192.168.56.111" //the master actively tries to connect to the client
  log_duration = 0
}

object Endpoint "icinga2-client2.localdomain" {
  host = "192.168.56.112" //the master actively tries to connect to the client
  log_duration = 0
}
2363 Configuration on the client `icinga2-client1.localdomain`:
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf

object Endpoint "icinga2-master1.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
  log_duration = 0
}

object Endpoint "icinga2-master2.localdomain" {
  //do not actively connect to the master by leaving out the 'host' attribute
  log_duration = 0
}
### CSR Auto-Signing with HA and Multi-Level Clusters <a id="distributed-monitoring-advanced-hints-csr-autosigning-ha-satellites"></a>
If you are using two masters in a High-Availability setup, it can be necessary
to allow both to sign requested certificates. Ensure that you safely sync the following
details between both master nodes:

* `TicketSalt` constant in `constants.conf`.
* `/var/lib/icinga2/ca` directory.
2388 This also helps if you are using a [three level cluster](06-distributed-monitoring.md#distributed-monitoring-scenarios-master-satellite-client)
2389 and your client nodes are not able to reach the CSR auto-signing master node(s).
2390 Make sure that the directory permissions for `/var/lib/icinga2/ca` are secure
2391 (not world readable).
2393 **Do not expose these private keys to anywhere else. This is a matter of security.**
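A hedged example for the initial sync from the first to the second master, assuming SSH access as root between the nodes:

[root@icinga2-master1.localdomain /]# scp -pr /var/lib/icinga2/ca root@icinga2-master2.localdomain:/var/lib/icinga2/
[root@icinga2-master2.localdomain /]# chown -R icinga:icinga /var/lib/icinga2/ca
[root@icinga2-master2.localdomain /]# chmod 700 /var/lib/icinga2/ca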
2395 ### Manual Certificate Creation <a id="distributed-monitoring-advanced-hints-certificates-manual"></a>
2397 #### Create CA on the Master <a id="distributed-monitoring-advanced-hints-certificates-manual-ca"></a>
2399 Choose the host which should store the certificate authority (one of the master nodes).
2401 The first step is the creation of the certificate authority (CA) by running the following command
2404 [root@icinga2-master1.localdomain /root]# icinga2 pki new-ca
2406 #### Create CSR and Certificate <a id="distributed-monitoring-advanced-hints-certificates-manual-create"></a>
2408 Create a certificate signing request (CSR) for the local instance:
2411 [root@icinga2-master1.localdomain /root]# icinga2 pki new-cert --cn icinga2-master1.localdomain \
2412 --key icinga2-master1.localdomain.key \
2413 --csr icinga2-master1.localdomain.csr
2416 Sign the CSR with the previously created CA:
2419 [root@icinga2-master1.localdomain /root]# icinga2 pki sign-csr --csr icinga2-master1.localdomain.csr --cert icinga2-master1.localdomain
Repeat the steps for all instances in your setup.

> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`. Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.
2429 #### Copy Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-copy"></a>
2431 Copy the host's certificate files and the public CA certificate to `/var/lib/icinga2/certs`:
2434 [root@icinga2-master1.localdomain /root]# mkdir -p /var/lib/icinga2/certs
2435 [root@icinga2-master1.localdomain /root]# cp icinga2-master1.localdomain.{crt,key} /var/lib/icinga2/certs
2436 [root@icinga2-master1.localdomain /root]# cp /var/lib/icinga2/ca/ca.crt /var/lib/icinga2/certs
2439 Ensure that proper permissions are set (replace `icinga` with the Icinga 2 daemon user):
2442 [root@icinga2-master1.localdomain /root]# chown -R icinga:icinga /var/lib/icinga2/certs
2443 [root@icinga2-master1.localdomain /root]# chmod 600 /var/lib/icinga2/certs/*.key
2444 [root@icinga2-master1.localdomain /root]# chmod 644 /var/lib/icinga2/certs/*.crt
The CA public and private key are stored in the `/var/lib/icinga2/ca` directory. Keep this path secure and include
it in your backups.
2450 #### Create Multiple Certificates <a id="distributed-monitoring-advanced-hints-certificates-manual-multiple"></a>
2452 Use your preferred method to automate the certificate generation process.
2455 [root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do icinga2 pki new-cert --cn $node --csr $node.csr --key $node.key; done
2456 information/base: Writing private key to 'icinga2-master1.localdomain.key'.
2457 information/base: Writing certificate signing request to 'icinga2-master1.localdomain.csr'.
2458 information/base: Writing private key to 'icinga2-master2.localdomain.key'.
2459 information/base: Writing certificate signing request to 'icinga2-master2.localdomain.csr'.
2460 information/base: Writing private key to 'icinga2-satellite1.localdomain.key'.
2461 information/base: Writing certificate signing request to 'icinga2-satellite1.localdomain.csr'.
2463 [root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master1.localdomain icinga2-master2.localdomain icinga2-satellite1.localdomain; do sudo icinga2 pki sign-csr --csr $node.csr --cert $node.crt; done
2464 information/pki: Writing certificate to file 'icinga2-master1.localdomain.crt'.
2465 information/pki: Writing certificate to file 'icinga2-master2.localdomain.crt'.
2466 information/pki: Writing certificate to file 'icinga2-satellite1.localdomain.crt'.
2469 Copy and move these certificates to the respective instances e.g. with SSH/SCP.
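For example, a small `scp` loop could distribute them, assuming SSH access as root and an existing `/var/lib/icinga2/certs` directory on the target nodes:

[root@icinga2-master1.localdomain /var/lib/icinga2/certs]# for node in icinga2-master2.localdomain icinga2-satellite1.localdomain; do scp $node.crt $node.key ca.crt root@$node:/var/lib/icinga2/certs/; done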
2471 ## Automation <a id="distributed-monitoring-automation"></a>
2473 These hints should get you started with your own automation tools (Puppet, Ansible, Chef, Salt, etc.)
2474 or custom scripts for automated setup.
2476 These are collected best practices from various community channels.
2478 * [Silent Windows setup](06-distributed-monitoring.md#distributed-monitoring-automation-windows-silent)
2479 * [Node Setup CLI command](06-distributed-monitoring.md#distributed-monitoring-automation-cli-node-setup) with parameters
2481 If you prefer an alternate method, we still recommend leaving all the Icinga 2 features intact (e.g. `icinga2 feature enable api`).
2482 You should also use well known and documented default configuration file locations (e.g. `zones.conf`).
2483 This will tremendously help when someone is trying to help in the [community channels](https://www.icinga.com/community/get-involved/).
2486 ### Silent Windows Setup <a id="distributed-monitoring-automation-windows-silent"></a>
2488 If you want to install the client silently/unattended, use the `/qn` modifier. The
2489 installation should not trigger a restart, but if you want to be completely sure, you can use the `/norestart` modifier.
2491 C:> msiexec /i C:\Icinga2-v2.5.0-x86.msi /qn /norestart
Once the setup is completed you can use the `node setup` CLI command too.
2495 ### Node Setup using CLI Parameters <a id="distributed-monitoring-automation-cli-node-setup"></a>
2497 Instead of using the `node wizard` CLI command, there is an alternative `node setup`
2498 command available which has some prerequisites.
2500 **Note**: The CLI command can be used on Linux/Unix and Windows operating systems.
2501 The graphical Windows setup wizard actively uses these CLI commands.
2503 #### Node Setup on the Master Node <a id="distributed-monitoring-automation-cli-node-setup-master"></a>
In case you want to set up a master node you must add the `--master` parameter
to the `node setup` CLI command. In addition, the `--cn` parameter can optionally
be passed (defaults to the FQDN).
2509 Parameter | Description
2510 --------------------|--------------------
2511 Common name (CN) | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN. Defaults to the FQDN.
2512 Listen on | **Optional.** Specified with the `--listen` parameter. Syntax is `host,port`.
Example:

[root@icinga2-master1.localdomain /]# icinga2 node setup --master
2518 In case you want to bind the `ApiListener` object to a specific
2519 host/port you can specify it like this:
--listen 192.168.56.101,5665
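Putting both together, a complete unattended master setup could look like this sketch; adjust the CN and the listening address to your environment:

[root@icinga2-master1.localdomain /]# icinga2 node setup --master --cn icinga2-master1.localdomain --listen 192.168.56.101,5665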
2524 #### Node Setup with Satellites/Clients <a id="distributed-monitoring-automation-cli-node-setup-satellite-client"></a>
> **Note**
>
> The certificate location changed in v2.8 to `/var/lib/icinga2/certs`. Please read the [upgrading chapter](16-upgrading-icinga-2.md#upgrading-to-2-8-certificate-paths)
> for more details.
2531 Make sure that the `/var/lib/icinga2/certs` directory exists and is owned by the `icinga`
2532 user (or the user Icinga 2 is running as).
2534 [root@icinga2-client1.localdomain /]# mkdir -p /var/lib/icinga2/certs
2535 [root@icinga2-client1.localdomain /]# chown -R icinga:icinga /var/lib/icinga2/certs
2537 First you'll need to generate a new local self-signed certificate.
2538 Pass the following details to the `pki new-cert` CLI command:
2540 Parameter | Description
2541 --------------------|--------------------
Common name (CN) | **Required.** By convention this should be the host's FQDN.
Client certificate files | **Required.** These generated files will be put into the specified location (`--key` and `--cert`). By convention this should be using `/var/lib/icinga2/certs` as directory.
Example:

[root@icinga2-client1.localdomain /]# icinga2 pki new-cert --cn icinga2-client1.localdomain \
--key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
--cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt
2551 Request the master certificate from the master host (`icinga2-master1.localdomain`)
2552 and store it as `trusted-master.crt`. Review it and continue.
2554 Pass the following details to the `pki save-cert` CLI command:
2556 Parameter | Description
2557 --------------------|--------------------
2558 Client certificate files | **Required.** Pass the previously generated files using the `--key` and `--cert` parameters.
2559 Trusted master certificate | **Required.** Store the master's certificate file. Manually verify that you're trusting it.
2560 Master host | **Required.** FQDN or IP address of the master host.
Example:

[root@icinga2-client1.localdomain /]# icinga2 pki save-cert --key /var/lib/icinga2/certs/icinga2-client1.localdomain.key \
--cert /var/lib/icinga2/certs/icinga2-client1.localdomain.crt \
--trustedcert /var/lib/icinga2/certs/trusted-master.crt \
--host icinga2-master1.localdomain
2569 Continue with the additional node setup step. Specify a local endpoint and zone name (`icinga2-client1.localdomain`)
2570 and set the master host (`icinga2-master1.localdomain`) as parent zone configuration. Specify the path to
2571 the previously stored trusted master certificate.
2573 Pass the following details to the `node setup` CLI command:
2575 Parameter | Description
2576 --------------------|--------------------
2577 Common name (CN) | **Optional.** Specified with the `--cn` parameter. By convention this should be the host's FQDN.
2578 Request ticket | **Required.** Add the previously generated [ticket number](06-distributed-monitoring.md#distributed-monitoring-setup-csr-auto-signing).
2579 Trusted master certificate | **Required.** Add the previously fetched trusted master certificate (this step means that you've verified its origin).
2580 Master endpoint | **Required.** Specify the master's endpoint name.
2581 Client zone name | **Required.** Specify the client's zone name.
2582 Master host | **Required.** FQDN or IP address of the master host.
2583 Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for [config sync mode](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)).
2584 Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for [command endpoint mode](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)).
2586 Example for Icinga 2 v2.8:
2588 [root@icinga2-client1.localdomain /]# icinga2 node setup --ticket ead2d570e18c78abf285d6b85524970a0f69c22d \
2589 --cn icinga2-client1.localdomain \
2590 --endpoint icinga2-master1.localdomain \
2591 --zone icinga2-client1.localdomain \
2592 --master_host icinga2-master1.localdomain \
2593 --trustedcert /var/lib/icinga2/certs/trusted-master.crt \
2594 --accept-commands --accept-config
2596 In case the client should connect to the master node, you'll
2597 need to modify the `--endpoint` parameter using the format `cn,host,port`:
2599 --endpoint icinga2-master1.localdomain,192.168.56.101,5665
2601 Restart Icinga 2 afterwards:
2603 # service icinga2 restart
2605 **You can find additional best practices below.**
2607 Add an additional global zone. Please note the `>>` append mode.
[root@icinga2-client1.localdomain /]# cat <<EOF >>/etc/icinga2/zones.conf
object Zone "global-templates" {
  global = true
}
EOF
2615 Note: Packages >= 2.8 provide this configuration by default.
2617 If this client node is configured as [remote command endpoint execution](06-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)
2618 you can safely disable the `checker` feature. The `node setup` CLI command already disabled the `notification` feature.
2620 [root@icinga2-client1.localdomain /]# icinga2 feature disable checker
Disable "conf.d" inclusion if this is a [top down](06-distributed-monitoring.md#distributed-monitoring-top-down)
configured client as command execution bridge:
2625 [root@icinga2-client1.localdomain /]# sed -i 's/include_recursive "conf.d"/\/\/include_recursive "conf.d"/g' /etc/icinga2/icinga2.conf
2627 **Optional**: Add an ApiUser object configuration for remote troubleshooting.
[root@icinga2-client1.localdomain /]# cat <<EOF >/etc/icinga2/conf.d/api-users.conf
object ApiUser "root" {
  password = "clientsupersecretpassword"
  permissions = ["*"]
}
EOF
In case you've previously disabled the "conf.d" directory, only
add the file `conf.d/api-users.conf`:
2639 [root@icinga2-client1.localdomain /]# echo 'include "conf.d/api-users.conf"' >> /etc/icinga2/icinga2.conf
2641 Finally restart Icinga 2.
2643 [root@icinga2-client1.localdomain /]# systemctl restart icinga2
Your automation tool must then configure the master node in the meantime.
Add the global zone `global-templates` in case it did not exist.
# cat <<EOF >>/etc/icinga2/zones.conf
object Endpoint "icinga2-client1.localdomain" {
  //client connects itself
}

object Zone "icinga2-client1.localdomain" {
  endpoints = [ "icinga2-client1.localdomain" ]

  parent = "master"
}

object Zone "global-templates" {
  global = true
}
EOF
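As with the other scenarios, validate the configuration and restart Icinga 2 on the master afterwards:

# icinga2 daemon -C
# systemctl restart icinga2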