1 # <a id="monitoring-remote-systems"></a> Monitoring Remote Systems
There are multiple ways you can monitor remote clients: either [agent-less](#agent-less-checks)
or [agent-based](#agent-based-checks-addon) using additional addons and tools.

Icinga 2 uses its own unique and secure communication protocol amongst instances,
be it a High-Availability cluster setup, a distributed load-balanced setup or just a single
agent [monitoring a remote client](#icinga2-remote-client-monitoring).
10 All communication is secured by SSL x509, and fully supports IPv4 and IPv6.
12 If you are planning to use the native Icinga 2 cluster feature for distributed
13 monitoring and high-availability, please continue reading in
14 [this chapter](#distributed-monitoring-high-availability).
18 > Don't panic - there are CLI commands available, including setup wizards for easy installation
19 > with SSL certificates.
20 > If you prefer to use your own CA (for example Puppet) you can do that as well.
22 ## <a id="agent-less-checks"></a> Agent-less Checks
24 If the remote service is available using a network protocol and port,
and a [check plugin](#setting-up-check-plugins) is available, you don't
necessarily need a local client installed. Instead, choose a plugin and
configure all parameters and thresholds. The [Icinga 2 Template Library](#itl)
already ships various examples such as:
30 * [ping4](#plugin-check-command-ping4), [ping6](#plugin-check-command-ping6),
31 [fping4](#plugin-check-command-fping4), [fping6](#plugin-check-command-fping6), [hostalive](#plugin-check-command-hostalive)
32 * [tcp](#plugin-check-command-tcp), [udp](#plugin-check-command-udp), [ssl](#plugin-check-command-ssl)
33 * [http](#plugin-check-command-http), [ftp](#plugin-check-command-ftp)
34 * [smtp](#plugin-check-command-smtp), [ssmtp](#plugin-check-command-ssmtp),
35 [imap](#plugin-check-command-imap), [simap](#plugin-check-command-simap),
36 [pop](#plugin-check-command-pop), [spop](#plugin-check-command-spop)
37 * [ntp_time](#plugin-check-command-ntp_time)
38 * [ssh](#plugin-check-command-ssh)
39 * [dns](#plugin-check-command-dns), [dig](#plugin-check-command-dig), [dhcp](#plugin-check-command-dhcp)
There are numerous check plugins contributed by community members available
on the internet. If you find one that matches your requirements, [integrate it into Icinga 2](#command-plugin-integration).

Start your plugin search at:

* [Icinga Exchange](https://exchange.icinga.org)
* [Icinga Wiki](https://wiki.icinga.org)
50 ## <a id="icinga2-remote-client-monitoring"></a> Monitoring Icinga 2 Remote Clients
52 First, you should decide which role the remote client has:
54 * a single host with local checks
55 * a remote satellite checking other hosts (for example in your DMZ)
59 > If you are planning to build an Icinga 2 distributed setup using the cluster feature, please skip
60 > the following instructions and jump directly to the
61 > [cluster setup instructions](#distributed-monitoring-high-availability).
> Remote instances are independent Icinga 2 instances which schedule
> their checks and just synchronize the check results back to the defined master zone.
68 ## <a id="icinga2-remote-monitoring-master"></a> Master Setup for Remote Monitoring
70 If you are planning to use the [remote Icinga 2 clients](#icinga2-remote-monitoring-client)
71 you'll first need to update your master setup.
Your master setup requires the following:
75 * SSL CA and signed certificate for the master
76 * Enabled API feature, and a local Endpoint and Zone object configuration
77 * Firewall ACLs for the communication port (default 5665)
You can use the [cli command](#cli-command-node) `node wizard` for setting up a new node
on the master. The command must be run as root; all Icinga 2 specific files
will be updated to the icinga user the daemon is running as (certificate files
for example).

Make sure to answer the first question with `n` (no).
88 Welcome to the Icinga 2 Setup Wizard!
90 We'll guide you through all required configuration details.
92 If you have questions, please consult the documentation at http://docs.icinga.org
93 or join the community support channels at https://support.icinga.org
96 Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]: n
97 Starting the Master setup routine...
98 Please specifiy the common name (CN) [icinga2m]:
99 information/base: Writing private key to '/var/lib/icinga2/ca/ca.key'.
100 information/base: Writing X509 certificate to '/var/lib/icinga2/ca/ca.crt'.
101 information/cli: Initializing serial file in '/var/lib/icinga2/ca/serial.txt'.
102 information/cli: Generating new CSR in '/etc/icinga2/pki/icinga2m.csr'.
103 information/base: Writing private key to '/etc/icinga2/pki/icinga2m.key'.
104 information/base: Writing certificate signing request to '/etc/icinga2/pki/icinga2m.csr'.
105 information/cli: Signing CSR with CA and writing certificate to '/etc/icinga2/pki/icinga2m.crt'.
106 information/cli: Copying CA certificate to '/etc/icinga2/pki/ca.crt'.
107 information/cli: Dumping config items to file '/etc/icinga2/zones.conf'.
108 Please specify the API bind host/port (optional):
111 information/cli: Enabling the APIlistener feature.
112 information/cli: Updating constants.conf.
113 information/cli: Updating constants file '/etc/icinga2/constants.conf'.
114 information/cli: Updating constants file '/etc/icinga2/constants.conf'.
115 information/cli: Edit the constants.conf file '/etc/icinga2/constants.conf' and set a secure 'TicketSalt' constant.
118 Now restart your Icinga 2 daemon to finish the installation!
120 If you encounter problems or bugs, please do not hesitate to
121 get in touch with the community at https://support.icinga.org
The setup wizard will do the following:

* Generate a local CA in `/var/lib/icinga2/ca` or use the existing one
* Generate a new CSR, sign it with the local CA and copy it into `/etc/icinga2/pki`
* Generate a local zone and endpoint configuration for this master based on FQDN
* Enable the API feature and set the optional `bind_host` and `bind_port`
* Set the `NodeName` and `TicketSalt` constants in [constants.conf](#constants-conf)

The setup wizard does not automatically restart Icinga 2.
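Depending on your distribution's init system, restarting usually means one of the following commands (the service name `icinga2` is the packaged default):

    # service icinga2 restart

or, on systemd based systems:

    # systemctl restart icinga2.service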
> This setup wizard will install a standalone master; HA cluster scenarios are currently
> not supported.
142 ## <a id="icinga2-remote-monitoring-client"></a> Client Setup for Remote Monitoring
Icinga 2 can be installed on Linux/Unix and Windows. While
[Linux/Unix](#icinga2-remote-monitoring-client-linux) clients use the [cli command](#cli-command-node)
`node wizard` for a guided setup, you will need to use the
graphical installer for a Windows based client setup.
Your client setup requires the following:
* SSL signed certificate for communication with the master (use [CSR auto-signing](#csr-autosigning-requirements)).
152 * Enabled API feature, and a local Endpoint and Zone object configuration
153 * Firewall ACLs for the communication port (default 5665)
157 ### <a id="icinga2-remote-monitoring-client-linux"></a> Linux Client Setup for Remote Monitoring
159 #### <a id="csr-autosigning-requirements"></a> Requirements for CSR Auto-Signing
161 If your remote clients are capable of connecting to the central master, Icinga 2
162 supports CSR auto-signing.
First you'll need to define a secure ticket salt in [constants.conf](#constants-conf).
The [setup wizard for the master setup](#icinga2-remote-monitoring-master) will create
one for you in advance:
168 # grep TicketSalt /etc/icinga2/constants.conf
The client setup wizard will ask you to generate a valid ticket number using its CN.
If you already know your remote clients' Common Names (CNs) - usually the FQDN - you
can generate all ticket numbers on-demand.

This is also useful if you are not the one installing the remote client, but
a colleague or a customer is.
177 Example for a client notebook:
179 # icinga2 pki ticket --cn nbmif.int.netways.de
183 > You can omit the `--salt` parameter using the `TicketSalt` constant from
184 > [constants.conf](#constants-conf) if already defined and Icinga 2 was
185 > reloaded after the master setup.
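If the constant is not available on the machine where you generate the ticket, a sketch with an explicit salt looks like this (the salt value is a placeholder for your own `TicketSalt`):

    # icinga2 pki ticket --cn nbmif.int.netways.de --salt <your-ticket-salt>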
187 #### <a id="certificates-manual-creation"></a> Manual SSL Certificate Generation
189 This is described separately in the [cluster setup chapter](#manual-certificate-generation).
193 > If you're using [CSR Auto-Signing](#csr-autosigning-requirements), skip this step.
196 #### <a id="icinga2-remote-monitoring-client-linux-setup"></a> Linux Client Setup Wizard for Remote Monitoring
198 Install Icinga 2 from your distribution's package repository as described in the
199 general [installation instructions](#setting-up-icinga2).
201 Please make sure that either [CSR Auto-Signing](#csr-autosigning-requirements) requirements
202 are fulfilled, or that you're using [manual SSL certificate generation](#manual-certificate-generation).
206 > You don't need any features (DB IDO, Livestatus) or user interfaces on the remote client.
207 > Install them only if you're planning to use them.
Once the package installation has succeeded, use the `node wizard` cli command to set up
a new Icinga 2 node as client.
212 You'll need the following configuration details:
214 * The client common name (CN). Defaults to FQDN.
215 * The client's local zone name. Defaults to FQDN.
216 * The master endpoint name. Look into your master setup `zones.conf` file for the proper name.
217 * The master endpoint connection information. Your master's IP address and port (defaults to 5665)
218 * The [request ticket number](#csr-autosigning-requirements) generated on your master
* Bind host/port for the API feature (optional)
The command must be run as root; all Icinga 2 specific files will be updated to the icinga
user the daemon is running as (certificate files for example).

Make sure to answer the first question with `y` (yes).
228 # icinga2 node wizard
230 Welcome to the Icinga 2 Setup Wizard!
232 We'll guide you through all required configuration details.
234 If you have questions, please consult the documentation at http://docs.icinga.org
235 or join the community support channels at https://support.icinga.org
238 Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]:
239 Starting the Node setup routine...
240 Please specifiy the common name (CN) [nbmif.int.netways.de]:
241 Please specifiy the local zone name [nbmif.int.netways.de]:
242 Please specify the master endpoint(s) this node should connect to:
243 Master Common Name (CN from your master setup, defaults to FQDN): icinga2m
244 Please fill out the master connection information:
245 Master endpoint host (required, your master's IP address or FQDN): 192.168.33.100
246 Master endpoint port (optional) []:
247 Add more master endpoints? [y/N]
248 Please specify the master connection for CSR auto-signing (defaults to master endpoint host):
249 Host [192.168.33.100]:
251 information/base: Writing private key to '/var/lib/icinga2/ca/ca.key'.
252 information/base: Writing X509 certificate to '/var/lib/icinga2/ca/ca.crt'.
253 information/cli: Initializing serial file in '/var/lib/icinga2/ca/serial.txt'.
254 information/base: Writing private key to '/etc/icinga2/pki/nbmif.int.netways.de.key'.
255 information/base: Writing X509 certificate to '/etc/icinga2/pki/nbmif.int.netways.de.crt'.
256 information/cli: Generating self-signed certifiate:
257 information/cli: Fetching public certificate from master (192.168.33.100, 5665):
259 information/cli: Writing trusted certificate to file '/etc/icinga2/pki/trusted-master.crt'.
260 information/cli: Stored trusted master certificate in '/etc/icinga2/pki/trusted-master.crt'.
262 Please specify the request ticket generated on your Icinga 2 master.
263 (Hint: '# icinga2 pki ticket --cn nbmif.int.netways.de'):
264 2e070405fe28f311a455b53a61614afd718596a1
265 information/cli: Processing self-signed certificate request. Ticket '2e070405fe28f311a455b53a61614afd718596a1'.
267 information/cli: Writing signed certificate to file '/etc/icinga2/pki/nbmif.int.netways.de.crt'.
268 information/cli: Writing CA certificate to file '/var/lib/icinga2/ca/ca.crt'.
269 Please specify the API bind host/port (optional):
272 information/cli: Disabling the Notification feature.
273 Disabling feature notification. Make sure to restart Icinga 2 for these changes to take effect.
274 information/cli: Enabling the Apilistener feature.
275 information/cli: Generating local zones.conf.
276 information/cli: Dumping config items to file '/etc/icinga2/zones.conf'.
277 information/cli: Updating constants.conf.
278 information/cli: Updating constants file '/etc/icinga2/constants.conf'.
281 Now restart your Icinga 2 daemon to finish the installation!
283 If you encounter problems or bugs, please do not hesitate to
284 get in touch with the community at https://support.icinga.org
The setup wizard will do the following:

* Generate a local CA in `/var/lib/icinga2/ca` or use the existing one
* Generate a new CSR, sign it with the local CA and copy it into `/etc/icinga2/pki`
* Store the master's certificate as trusted certificate for requesting a new signed certificate
(manual step when using `node setup`).
* Request a new signed certificate from the master and store the updated certificate and master CA in `/etc/icinga2/pki`
* Generate a local zone and endpoint configuration for this client and the provided master information
* Disable the notification feature for this client
* Enable the API feature and set the optional `bind_host` and `bind_port`
* Set the `NodeName` constant in [constants.conf](#constants-conf)
300 The setup wizard does not automatically restart Icinga 2.
If you are getting an error when requesting the ticket number, please check the following (see the example below):

* Is the CN the same (from `pki ticket` on the master and the setup wizard on the client)?
* Has the ticket expired?
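A quick way to cross-check the first point is to re-run the ticket generation on the master with exactly the CN the client reported and compare the output with the ticket you entered in the wizard:

    # icinga2 pki ticket --cn nbmif.int.netways.de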
308 ### <a id="icinga2-remote-monitoring-client-windows"></a> Windows Client Setup for Remote Monitoring
310 Download the MSI-Installer package from [http://packages.icinga.org/windows/](http://packages.icinga.org/windows/).
313 * [Microsoft .NET Framework 2.0](http://www.microsoft.com/de-de/download/details.aspx?id=1639) if not already installed.
315 The setup wizard will install Icinga 2 and then continue with SSL certificate generation,
316 CSR-Autosigning and configuration setup.
318 You'll need the following configuration details:
320 * The client common name (CN). Defaults to FQDN.
321 * The client's local zone name. Defaults to FQDN.
322 * The master endpoint name. Look into your master setup `zones.conf` file for the proper name.
323 * The master endpoint connection information. Your master's IP address and port (defaults to 5665)
324 * The [request ticket number](#csr-autosigning-requirements) generated on your master
* Bind host/port for the API feature (optional)
Once the installation is finished, Icinga 2 is automatically started as a Windows service.
333 ### <a id="icinga2-remote-monitoring-client-configuration"></a> Client Configuration for Remote Monitoring
The configuration syntax on clients does not differ from any other Icinga 2 installation.
337 The following convention applies to remote clients:
339 * The hostname in the default host object should be the same as the Common Name (CN) used for SSL setup
340 * Add new services and check commands locally
The default setup routine will install a new host based on your FQDN in `repository.d/hosts` with all
services in separate configuration files in a directory underneath.
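As a minimal sketch of adding a new service locally (as mentioned in the convention above; the host name matches the client's CN, and the `generic-service` template and thresholds are assumptions based on the sample configuration):

    apply Service "disk" {
      import "generic-service"

      check_command = "disk"
      vars.disk_wfree = "10%"
      vars.disk_cfree = "5%"

      assign where host.name == NodeName
    }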
345 The repository can be managed using the cli command `repository`.
> The cli command `repository` only supports basic configuration manipulation (add, remove). Future
> versions will support more options (set, etc.). Please check the Icinga 2 development roadmap.
You can also use additional features like notifications directly on the remote client, if
required. Basically everything a single Icinga 2 instance provides by default is available.
357 ### <a id="icinga2-remote-monitoring-master-discovery"></a> Discover Client Services on the Master
359 Icinga 2 clients will sync their locally defined objects to the defined master node. That way you can
360 list, add, filter and remove nodes based on their `node`, `zone`, `host` or `service` name.
362 List all discovered nodes (satellites, agents) and their hosts/services:
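    # icinga2 node list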
367 #### <a id="icinga2-remote-monitoring-master-discovery-manual"></a> Manually Discover Clients on the Master
369 Add a to-be-discovered client to the master:
371 # icinga2 node add my-remote-client
Set the connection details, and the Icinga 2 master will attempt to connect to this node and sync its
configured objects:

# icinga2 node set my-remote-client --host 192.168.33.101 --port 5665
You can verify the settings by calling the `node list` command:

Node 'my-remote-client' (host: 192.168.33.101, port: 5665, log duration: 1 day, last seen: Sun Nov 2 17:46:29 2014)
383 #### <a id="icinga2-remote-monitoring-master-discovery-remove"></a> Remove Discovered Clients
385 If you don't require a connected agent, you can manually remove it and its discovered hosts and services
386 using the following cli command:
388 # icinga2 node remove my-discovered-agent
392 > Better use [blacklists and/or whitelists](#icinga2-remote-monitoring-master-discovery-blacklist-whitelist)
393 > to control which clients and hosts/services are integrated into your master configuration repository.
395 ### <a id="icinga2-remote-monitoring-master-discovery-generate-config"></a> Generate Icinga 2 Configuration for Client Services on the Master
397 There is a dedicated Icinga 2 CLI command for updating the client services on the master,
398 generating all required configuration.
400 # icinga2 node update-config
402 The generated configuration of all nodes is stored in the `repository.d/` directory.
404 By default, the following additional configuration is generated:
405 * add `Endpoint` and `Zone` objects for the newly added node
* add a `cluster-zone` health check for the master host, detecting whether the remote node has died
* use the default templates `satellite-host` and `satellite-service` defined in `/etc/icinga2/conf.d/satellite.conf`
* apply a dependency for all other hosts on the remote satellite, preventing failure checks/notifications
412 > If there are existing hosts/services defined or modified, the cli command will not overwrite these (modified)
413 > configuration files.
415 > If hosts or services disappeared from the client discovery, it will remove the existing configuration objects
416 > from the config repository.
The `update-config` cli command will fail if there are uncommitted changes in the
configuration repository.
Please review these changes manually, or clear the commit and try again. This is a
safety hook to prevent unwanted manual changes from being committed accidentally when
updating only the client-discovered objects.
424 # icinga2 repository commit --simulate
426 # icinga2 repository clear-changes
428 # icinga2 repository commit
430 After updating the configuration repository, make sure to reload Icinga 2.
432 # service icinga2 reload
If you are using systemd:

# systemctl reload icinga2.service
439 #### <a id="icinga2-remote-monitoring-master-discovery-blacklist-whitelist"></a> Blacklist/Whitelist for Clients on the Master
It's sometimes necessary to `blacklist` an entire remote client, or specific hosts or services
provided by this client. While it may be reasonable for the local admin to configure, for example, an
additional ping check, you may not want it on the master, which sends out notifications
and presents the dashboard to your support team.

Blacklisting an entire set might not be sufficient for excluding several objects, for instance when a
specific remote client has one ping service you are still interested in. Therefore you can `whitelist`
clients, hosts and services in a similar manner.
Example for blacklisting all `ping*` services, but allowing only the `probe` host with `ping*`:
452 # icinga2 node blacklist add --zone "*" --host "*" --service "ping*"
453 # icinga2 node whitelist add --zone "*" --host "probe" --service "ping*"
455 You can `list` and `remove` existing blacklists:
457 # icinga2 node blacklist list
458 Listing all blacklist entries:
459 blacklist filter for Node: '*' Host: '*' Service: 'ping*'.
461 # icinga2 node whitelist list
462 Listing all whitelist entries:
463 whitelist filter for Node: '*' Host: 'probe' Service: 'ping*'.
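Removing an entry works with the same filter arguments, for example:

    # icinga2 node blacklist remove --zone "*" --host "*" --service "ping*"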
> The `--zone` and `--host` arguments are required. A zone is always the zone the remote client is a member of.
> If you are unsure about it, set a wildcard (`*`) for them and filter only by host/services.
472 ### <a id="icinga2-remote-monitoring-master-manual-add-endpoint-zone"></a> Manually add Client Endpoint and Zone Objects on the Master
474 Define a [Zone](#objecttype-zone) with a new [Endpoint](#objecttype-endpoint) similar to the cluster setup.
476 * [configure the node name](#configure-nodename)
477 * [configure the ApiListener object](#configure-apilistener-object)
478 * [configure cluster endpoints](#configure-cluster-endpoints)
479 * [configure cluster zones](#configure-cluster-zones)
on a per remote client basis. If you prefer to synchronize the configuration to remote
clients, you can also use the cluster provided [configuration sync](#cluster-zone-config-sync).
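A minimal sketch of such a definition in the master's `zones.conf`, assuming a remote client whose CN is `remote-client1`, reachable at `192.168.33.101`, and a local master zone named `master`:

    object Endpoint "remote-client1" {
      host = "192.168.33.101"
    }

    object Zone "remote-client1" {
      endpoints = [ "remote-client1" ]
      parent = "master"
    }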
## <a id="agent-based-checks-addon"></a> Agent-based Checks using additional Software
If the remote services are not directly accessible through the network, a
local agent installation exposing the results to check queries can
become handy.
492 ### <a id="agent-based-checks-snmp"></a> SNMP
494 The SNMP daemon runs on the remote system and answers SNMP queries by plugin
495 binaries. The [Monitoring Plugins package](#setting-up-check-plugins) ships
496 the `check_snmp` plugin binary, but there are plenty of [existing plugins](#integrate-additional-plugins)
497 for specific use cases already around, for example monitoring Cisco routers.
The following example uses the [SNMP ITL](#plugin-check-command-snmp) `CheckCommand` and just
overrides the `snmp_oid` custom attribute. A service is created for all hosts which
have the `snmp_community` custom attribute.
    apply Service "uptime" {
      import "generic-service"

      check_command = "snmp"
      vars.snmp_oid = "1.3.6.1.2.1.1.3.0"

      assign where host.vars.snmp_community != ""
    }
512 Additional SNMP plugins are available using the [Manubulon SNMP Plugins](#snmp-manubulon-plugin-check-commands).
514 ### <a id="agent-based-checks-ssh"></a> SSH
Calling a plugin using the SSH protocol executes the check plugin on the remote server and fetches
its return code and output. The `by_ssh` command object is part of the built-in templates and
requires the `check_by_ssh` check plugin which is available in the [Monitoring Plugins package](#setting-up-check-plugins).
    object CheckCommand "by_ssh_swap" {
      import "by_ssh"

      vars.by_ssh_command = "/usr/lib/nagios/plugins/check_swap -w $by_ssh_swap_warn$ -c $by_ssh_swap_crit$"
      vars.by_ssh_swap_warn = "75%"
      vars.by_ssh_swap_crit = "50%"
    }
    object Service "swap" {
      import "generic-service"

      host_name = "remote-ssh-host"

      check_command = "by_ssh_swap"

      vars.by_ssh_logname = "icinga"
    }
538 ### <a id="agent-based-checks-nrpe"></a> NRPE
[NRPE](http://docs.icinga.org/latest/en/nrpe.html) runs as a daemon on the remote client including
the required plugins and command definitions.
Icinga 2 calls the `check_nrpe` plugin binary in order to query the configured command on the
remote client.

The NRPE daemon uses its own configuration format in `nrpe.cfg`, while `check_nrpe`
546 can be embedded into the Icinga 2 `CheckCommand` configuration syntax.
548 You can use the `check_nrpe` plugin from the NRPE project to query the NRPE daemon.
549 Icinga 2 provides the [nrpe check command](#plugin-check-command-nrpe) for this:
    object Service "users" {
      import "generic-service"

      host_name = "remote-nrpe-host"

      check_command = "nrpe"
      vars.nrpe_command = "check_users"
    }

The corresponding command definition in `nrpe.cfg` on the remote client looks like this:

    command[check_users]=/usr/local/icinga/libexec/check_users -w 5 -c 10
566 ### <a id="agent-based-checks-nsclient"></a> NSClient++
568 [NSClient++](http://nsclient.org) works on both Windows and Linux platforms and is well
569 known for its magnificent Windows support. There are alternatives like the WMI interface,
570 but using `NSClient++` will allow you to run local scripts similar to check plugins fetching
571 the required output and performance counters.
573 You can use the `check_nt` plugin from the Monitoring Plugins project to query NSClient++.
574 Icinga 2 provides the [nscp check command](#plugin-check-command-nscp) for this:
    object Service "disk" {
      import "generic-service"

      host_name = "remote-windows-host"

      check_command = "nscp"

      vars.nscp_variable = "USEDDISKSPACE"
      vars.nscp_params = "c"
    }
591 For details on the `NSClient++` configuration please refer to the [official documentation](http://www.nsclient.org/nscp/wiki/doc/configuration/0.4.x).
593 ### <a id="agent-based-checks-nsca-ng"></a> NSCA-NG
[NSCA-ng](http://www.nsca-ng.org) provides a client-server pair that allows the
remote sender to push check results into the Icinga 2 `ExternalCommandListener`
feature.
> This addon works in a similar fashion to the Icinga 1.x distributed model. If you
> are looking for a real distributed architecture with Icinga 2, scroll down.
604 ### <a id="agent-based-checks-snmp-traps"></a> Passive Check Results and SNMP Traps
606 SNMP Traps can be received and filtered by using [SNMPTT](http://snmptt.sourceforge.net/) and specific trap handlers
607 passing the check results to Icinga 2.
611 > The host and service object configuration must be available on the Icinga 2
612 > server in order to process passive check results.
617 ## <a id="distributed-monitoring-high-availability"></a> Distributed Monitoring and High Availability
619 Building distributed environments with high availability included is fairly easy with Icinga 2.
620 The cluster feature is built-in and allows you to build many scenarios based on your requirements:
622 * [High Availability](#cluster-scenarios-high-availability). All instances in the `Zone` elect one active master and run as Active/Active cluster.
623 * [Distributed Zones](#cluster-scenarios-distributed-zones). A master zone and one or more satellites in their zones.
624 * [Load Distribution](#cluster-scenarios-load-distribution). A configuration master and multiple checker satellites.
626 You can combine these scenarios into a global setup fitting your requirements.
Each instance has its own event scheduler, and does not depend on a centralized master
coordinating and distributing the events. In case of a cluster failure, all nodes
continue to run independently. Be aware that when your cluster fails and a split-brain scenario
is in effect, all alive instances continue to do their job, and history will begin to differ.
635 > Before you start, make sure to read the [requirements](#distributed-monitoring-requirements).
638 ### <a id="cluster-requirements"></a> Cluster Requirements
640 Before you start deploying, keep the following things in mind:
642 * Your [SSL CA and certificates](#certificate-authority-certificates) are mandatory for secure communication
643 * Get pen and paper or a drawing board and design your nodes and zones!
* All nodes in a cluster zone provide high availability functionality and trust each other
* Cluster zones can be built in a top-down design where the child trusts the parent
* Communication between zones happens bi-directionally, which means that a DMZ-located node can still reach the master node, or vice versa
* Update firewall rules and ACLs
* Decide whether to use the built-in [configuration synchronisation](#cluster-zone-config-sync) or use an external tool (Puppet, Ansible, Chef, Salt, etc) to manage the configuration deployment
653 > If you're looking for troubleshooting cluster problems, check the general
654 > [troubleshooting](#troubleshooting-cluster) section.
657 ### <a id="manual-certificate-generation"></a> Manual SSL Certificate Generation
659 Icinga 2 ships [cli commands](#cli-command-pki) assisting with CA and node certificate creation
660 for your Icinga 2 distributed setup.
> You're free to use your own method to generate a valid CA and signed client
> certificates.
The first step is the creation of the certificate authority (CA) by running the
following command:
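    # icinga2 pki new-ca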
672 Now create a certificate and key file for each node running the following command
673 (replace `icinga2a` with the required hostname):
675 # icinga2 pki new-cert --cn icinga2a --key icinga2a.key --csr icinga2a.csr
676 # icinga2 pki sign-csr --csr icinga2a.csr --cert icinga2a.crt
678 Repeat the step for all nodes in your cluster scenario.
680 Save the CA key in a secure location in case you want to set up certificates for
681 additional nodes at a later time.
683 Navigate to the location of your newly generated certificate files, and manually
684 copy/transfer them to `/etc/icinga2/pki` in your Icinga 2 configuration folder.
688 > The certificate files must be readable by the user Icinga 2 is running as. Also,
689 > the private key file must not be world-readable.
Each node requires the following files in `/etc/icinga2/pki` (replace `fqdn-nodename` with
the host's FQDN):

* `ca.crt`
* `<fqdn-nodename>.crt`
* `<fqdn-nodename>.key`
700 #### <a id="cluster-naming-convention"></a> Cluster Naming Convention
The SSL certificate common name (CN) will be used by the [ApiListener](#objecttype-apilistener)
object to determine the local authority. This name must match the local [Endpoint](#objecttype-endpoint)
object name.
708 # icinga2 pki new-cert --cn icinga2a --key icinga2a.key --csr icinga2a.csr
709 # icinga2 pki sign-csr --csr icinga2a.csr --cert icinga2a.crt
    object Endpoint "icinga2a" {
      host = "icinga2a.icinga.org"
    }
717 The [Endpoint](#objecttype-endpoint) name is further referenced as `endpoints` attribute on the
718 [Zone](#objecttype-zone) object.
    object Endpoint "icinga2b" {
      host = "icinga2b.icinga.org"
    }

    object Zone "config-ha-master" {
      endpoints = [ "icinga2a", "icinga2b" ]
    }
728 Specifying the local node name using the [NodeName](#configure-nodename) variable requires
729 the same name as used for the endpoint name and common name above. If not set, the FQDN is used.
731 const NodeName = "icinga2a"
734 ### <a id="cluster-configuration"></a> Cluster Configuration
The following sections describe which configuration must be updated/created
737 in order to get your cluster running with basic functionality.
739 * [configure the node name](#configure-nodename)
740 * [configure the ApiListener object](#configure-apilistener-object)
741 * [configure cluster endpoints](#configure-cluster-endpoints)
742 * [configure cluster zones](#configure-cluster-zones)
744 Once you're finished with the basic setup the following section will
745 describe how to use [zone configuration synchronisation](#cluster-zone-config-sync)
746 and configure [cluster scenarios](#cluster-scenarios).
748 #### <a id="configure-nodename"></a> Configure the Icinga Node Name
750 Instead of using the default FQDN as node name you can optionally set
751 that value using the [NodeName](#global-constants) constant.
755 > Skip this step if your FQDN already matches the default `NodeName` set
756 > in `/etc/icinga2/constants.conf`.
758 This setting must be unique for each node, and must also match
759 the name of the local [Endpoint](#objecttype-endpoint) object and the
760 SSL certificate common name as described in the
761 [cluster naming convention](#cluster-naming-convention).
    vim /etc/icinga2/constants.conf

    /* Our local instance name. By default this is the server's hostname as returned by `hostname --fqdn`.
     * This should be the common name from the API certificate.
     */
    const NodeName = "icinga2a"
771 Read further about additional [naming conventions](#cluster-naming-convention).
Not specifying the node name will make Icinga 2 use the FQDN. Make sure that all
configured endpoint names and common names are in sync.
776 #### <a id="configure-apilistener-object"></a> Configure the ApiListener Object
The [ApiListener](#objecttype-apilistener) object needs to be configured on
every node in the cluster with the following settings. A sample config looks like:
    object ApiListener "api" {
      cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
      key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
      ca_path = SysconfDir + "/icinga2/pki/ca.crt"
    }
790 You can simply enable the `api` feature using
792 # icinga2 feature enable api
794 Edit `/etc/icinga2/features-enabled/api.conf` if you require the configuration
795 synchronisation enabled for this node. Set the `accept_config` attribute to `true`.
799 > The certificate files must be readable by the user Icinga 2 is running as. Also,
800 > the private key file must not be world-readable.
802 #### <a id="configure-cluster-endpoints"></a> Configure Cluster Endpoints
`Endpoint` objects specify the `host` and `port` settings for the cluster nodes.
This configuration can be the same on all nodes in the cluster, as it only contains
connection information.
808 A sample configuration looks like:
    /**
     * Configure config master endpoint
     */
    object Endpoint "icinga2a" {
      host = "icinga2a.icinga.org"
    }
818 If this endpoint object is reachable on a different port, you must configure the
819 `ApiListener` on the local `Endpoint` object accordingly too.
821 #### <a id="configure-cluster-zones"></a> Configure Cluster Zones
823 `Zone` objects specify the endpoints located in a zone. That way your distributed setup can be
824 seen as zones connected together instead of multiple instances in that specific zone.
826 Zones can be used for [high availability](#cluster-scenarios-high-availability),
827 [distributed setups](#cluster-scenarios-distributed-zones) and
828 [load distribution](#cluster-scenarios-load-distribution).
Each Icinga 2 `Endpoint` must be put into its respective `Zone`. In this example, you will
define the zone `config-ha-master` where the `icinga2a` and `icinga2b` endpoints
are located. The `check-satellite` zone consists of `icinga2c` only, but more nodes could
be added as well.
The `config-ha-master` zone acts as High-Availability setup - the Icinga 2 instances elect
one active master on which all features run (for example `icinga2a`). In case of
failure of the `icinga2a` instance, `icinga2b` will take over automatically.
    object Zone "config-ha-master" {
      endpoints = [ "icinga2a", "icinga2b" ]
    }
The `check-satellite` zone is a separate location and only sends its check results back to
the defined parent zone `config-ha-master`.
    object Zone "check-satellite" {
      endpoints = [ "icinga2c" ]
      parent = "config-ha-master"
    }
852 ### <a id="cluster-zone-config-sync"></a> Zone Configuration Synchronisation
854 By default all objects for specific zones should be organized in
856 /etc/icinga2/zones.d/<zonename>
858 on the configuration master.
860 Your child zones and endpoint members **must not** have their config copied to `zones.d`.
861 The built-in configuration synchronisation takes care of that if your nodes accept
862 configuration from the parent zone. You can define that in the
863 [ApiListener](#configure-apilistener-object) object by configuring the `accept_config`
864 attribute accordingly.
You should remove the sample config included in `conf.d` by commenting out the `include_recursive`
statement in [icinga2.conf](#icinga2-conf):

    //include_recursive "conf.d"
Instead, use a dedicated directory name like `cluster` or similar, and include that
one if your nodes require local configuration that should not be synced to other nodes. That's
useful for local [health checks](#cluster-health-check), for example.
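A sketch of that, assuming your node-local configuration lives in `/etc/icinga2/cluster`:

    include_recursive "cluster"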
877 > In a [high availability](#cluster-scenarios-high-availability)
878 > setup only one assigned node can act as configuration master. All other zone
879 > member nodes **must not** have the `/etc/icinga2/zones.d` directory populated.
881 These zone packages are then distributed to all nodes in the same zone, and
882 to their respective target zone instances.
884 Each configured zone must exist with the same directory name. The parent zone
885 syncs the configuration to the child zones, if allowed using the `accept_config`
886 attribute of the [ApiListener](#configure-apilistener-object) object.
888 Config on node `icinga2a`:
    object Zone "master" {
      endpoints = [ "icinga2a" ]
    }

    object Zone "checker" {
      endpoints = [ "icinga2b" ]
      parent = "master"
    }
906 Config on node `icinga2b`:
    object Zone "master" {
      endpoints = [ "icinga2a" ]
    }

    object Zone "checker" {
      endpoints = [ "icinga2b" ]
      parent = "master"
    }
The `zones.d` directory on `icinga2b` must stay empty when the config sync is enabled:

    /etc/icinga2/zones.d
      EMPTY_IF_CONFIG_SYNC_ENABLED
If the local configuration is newer than the received update, Icinga 2 will skip the synchronisation
update.
925 > `zones.d` must not be included in [icinga2.conf](#icinga2-conf). Icinga 2 automatically
926 > determines the required include directory. This can be overridden using the
927 > [global constant](#global-constants) `ZonesDir`.
929 #### <a id="zone-global-config-templates"></a> Global Configuration Zone for Templates
If your zone configuration setup shares the same templates, groups, commands, timeperiods, etc.,
you would have to duplicate quite a lot of configuration objects, making the merged configuration
on your configuration master unique.
> Only put templates, groups, etc. into this zone. DO NOT add checkable objects such as
> hosts or services here. If they are checked by all instances globally, this will lead
> to duplicated check results and unclear state history. It is not easy to troubleshoot either -
> you've been warned.
That duplication is not necessary if you define a global zone shipping all those templates. By setting
`global = true` you ensure that this zone serving common configuration templates will be
synchronized to all involved nodes (only if they accept configuration though).

On the configuration master, the shared configuration (templates, groups, etc.) is placed below the corresponding directory in `/etc/icinga2/zones.d`.
958 In this example, the global zone is called `global-templates` and must be defined in
959 your zone configuration visible to all nodes.
    object Zone "global-templates" {
      global = true
    }
> If the remote node does not have this zone configured, it will ignore the configuration
> update, even if it accepts synchronized configuration.
970 If you don't require any global configuration, skip this setting.
972 #### <a id="zone-config-sync-permissions"></a> Zone Configuration Synchronisation Permissions
974 Each [ApiListener](#objecttype-apilistener) object must have the `accept_config` attribute
975 set to `true` to receive configuration from the parent `Zone` members. Default value is `false`.
    object ApiListener "api" {
      cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
      key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
      ca_path = SysconfDir + "/icinga2/pki/ca.crt"
      accept_config = true
    }
984 If `accept_config` is set to `false`, this instance won't accept configuration from remote
985 master instances anymore.
989 > Look into the [troubleshooting guides](#troubleshooting-cluster-config-sync) for debugging
990 > problems with the configuration synchronisation.
993 ### <a id="cluster-health-check"></a> Cluster Health Check
The Icinga 2 [ITL](#itl) ships an internal check command checking all configured
`Endpoints` in the cluster setup. The check result will become critical if
one or more configured nodes are not connected.
    object Service "cluster" {
      check_command = "cluster"

      host_name = "icinga2a"
    }
Each cluster node should execute its own local cluster health check to
get an idea about network related connection problems from different
points of view.

Additionally you can monitor the connection from the local zone to the remote
zone.
1016 Example for the `checker` zone checking the connection to the `master` zone:
    object Service "cluster-zone-master" {
      check_command = "cluster-zone"
      vars.cluster_zone = "master"

      host_name = "icinga2b"
    }
1028 ### <a id="cluster-scenarios"></a> Cluster Scenarios
All cluster nodes are full-featured Icinga 2 instances. You only need to enable
the features for their role (for example, a `Checker` node only requires the `checker`
feature enabled, but not `notification` or `ido-mysql` features).
1034 #### <a id="cluster-scenarios-security"></a> Security in Cluster Scenarios
1036 While there are certain capabilities to ensure the safe communication between all
1037 nodes (firewalls, policies, software hardening, etc) the Icinga 2 cluster also provides
1038 additional security itself:
1040 * [SSL certificates](#certificate-authority-certificates) are mandatory for cluster communication.
* Child zones only receive event updates (check results, commands, etc) for their configured objects.
* Zones cannot influence/interfere with other zones. Each checked object is assigned to only one zone.
1043 * All nodes in a zone trust each other.
1044 * [Configuration sync](#zone-config-sync-permissions) is disabled by default.
1046 #### <a id="cluster-scenarios-features"></a> Features in Cluster Zones
1048 Each cluster zone may use all available features. If you have multiple locations
1049 or departments, they may write to their local database, or populate graphite.
Furthermore, all commands are distributed amongst connected nodes. For example, you could
re-schedule a check or acknowledge a problem on the master, and it gets replicated to the
actual slave checker node.
1054 DB IDO on the left, graphite on the right side - works (if you disable
1055 [DB IDO HA](#high-availability-db-ido)).
1056 Icinga Web 2 on the left, checker and notifications on the right side - works too.
1057 Everything on the left and on the right side - make sure to deal with
1058 [load-balanced notifications and checks](#high-availability-features) in a
1059 [HA zone](#cluster-scenarios-high-availability).
1061 #### <a id="cluster-scenarios-distributed-zones"></a> Distributed Zones
That scenario fits if your instances are spread over the globe and they all report
to a central master instance. Their network connection only works towards the master
(or the master is able to connect, depending on firewall policies) which means
remote instances won't see or connect to each other.
1068 All events (check results, downtimes, comments, etc) are synced to the master node,
1069 but the remote nodes can still run local features such as a web interface, reporting,
1070 graphing, etc. in their own specified zone.
Imagine the following example with a master node in Nuremberg, and two remote DMZ
based instances in Berlin and Vienna. Additionally you'll specify
[global templates](#zone-global-config-templates) available in all zones.
1076 The configuration tree on the master instance `nuremberg` could look like this:
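For example (only the zone directories are sketched here; which configuration files you put into each of them is up to you):

    /etc/icinga2/zones.d
      global-templates/
      nuremberg/
      berlin/
      vienna/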
1089 The configuration deployment will take care of automatically synchronising
1090 the child zone configuration:
1092 * The master node sends `zones.d/berlin` to the `berlin` child zone.
1093 * The master node sends `zones.d/vienna` to the `vienna` child zone.
1094 * The master node sends `zones.d/global-templates` to the `vienna` and `berlin` child zones.
1096 The endpoint configuration would look like:
    object Endpoint "nuremberg-master" {
      host = "nuremberg.icinga.org"
    }

    object Endpoint "berlin-satellite" {
      host = "berlin.icinga.org"
    }

    object Endpoint "vienna-satellite" {
      host = "vienna.icinga.org"
    }
1110 The zones would look like:
    object Zone "nuremberg" {
      endpoints = [ "nuremberg-master" ]
    }

    object Zone "berlin" {
      endpoints = [ "berlin-satellite" ]
      parent = "nuremberg"
    }

    object Zone "vienna" {
      endpoints = [ "vienna-satellite" ]
      parent = "nuremberg"
    }
    object Zone "global-templates" {
      global = true
    }
The `nuremberg` zone will only execute local checks, and receive
check results from the satellite nodes in the zones `berlin` and `vienna`.
1135 > The child zones `berlin` and `vienna` will get their configuration synchronised
1136 > from the configuration master 'nuremberg'. The endpoints in the child
1137 > zones **must not** have their `zones.d` directory populated if this endpoint
1138 > [accepts synced configuration](#zone-config-sync-permissions).
1140 #### <a id="cluster-scenarios-load-distribution"></a> Load Distribution
1142 If you are planning to off-load the checks to a defined set of remote workers
1143 you can achieve that by:
1145 * Deploying the configuration on all nodes.
1146 * Let Icinga 2 distribute the load amongst all available nodes.
1148 That way all remote check instances will receive the same configuration
1149 but only execute their part. The master instance located in the `master` zone
1150 can also execute checks, but you may also disable the `Checker` feature.
1152 Configuration on the master node:
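For example (again only a sketch of the zone directories below `zones.d`; host and service objects checked by the `checker` zone go into `zones.d/checker`):

    /etc/icinga2/zones.d
      global-templates/
      master/
      checker/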
1159 If you are planning to have some checks executed by a specific set of checker nodes
1160 you have to define additional zones and define these check objects there.
    object Endpoint "master-node" {
      host = "master.icinga.org"
    }

    object Endpoint "checker1-node" {
      host = "checker1.icinga.org"
    }

    object Endpoint "checker2-node" {
      host = "checker2.icinga.org"
    }
    object Zone "master" {
      endpoints = [ "master-node" ]
    }

    object Zone "checker" {
      endpoints = [ "checker1-node", "checker2-node" ]
      parent = "master"
    }
    object Zone "global-templates" {
      global = true
    }
> The child zone `checker` will get its configuration synchronised
> from the configuration master 'master'. The endpoints in the child
> zone **must not** have their `zones.d` directory populated if this endpoint
> [accepts synced configuration](#zone-config-sync-permissions).
1199 #### <a id="cluster-scenarios-high-availability"></a> Cluster High Availability
1201 High availability with Icinga 2 is possible by putting multiple nodes into
1202 a dedicated [zone](#configure-cluster-zones). All nodes will elect one
1203 active master, and retry an election once the current active master is down.
1205 Selected features provide advanced [HA functionality](#high-availability-features).
Checks and notifications are load-balanced between nodes in the high availability
zone.

Connections from other zones will be accepted by all active and passive nodes
but all are forwarded to the current active master dealing with the check results,
notifications, etc.
    object Zone "config-ha-master" {
      endpoints = [ "icinga2a", "icinga2b", "icinga2c" ]
    }
1217 Two or more nodes in a high availability setup require an [initial cluster sync](#initial-cluster-sync).
1221 > Keep in mind that **only one node acts as configuration master** having the
1222 > configuration files in the `zones.d` directory. All other nodes **must not**
1223 > have that directory populated. Instead they are required to
1224 > [accept synced configuration](#zone-config-sync-permissions).
1225 > Details in the [Configuration Sync Chapter](#cluster-zone-config-sync).
#### <a id="cluster-scenarios-multiple-hierachies"></a> Multiple Hierarchies
Your master zone collects all check results for reporting and graphing and also
does some sort of additional notifications.
The customers have their own instances in their local DMZ zones. They are limited to read/write
only their services, but replicate all events back to the master instance.
Within each DMZ there are additional check instances also serving interfaces for local
departments. The customers' instances will collect all results, but also send them back to
your master instance.
Additionally the customer's instance on the second level in the middle prohibits you from
sending commands to the subjacent department nodes. You're only allowed to receive the
results, and a subset of each customer's configuration too.
1240 Your master zone will generate global reports, aggregate alert notifications, and check
1241 additional dependencies (for example, the customers internet uplink and bandwidth usage).
The customers' zone instances will only check a subset of local services and delegate the rest
to each department. Even so, each customer instance acts as configuration master with a master dashboard
for all departments, managing their configuration tree which is then deployed to all
department instances. Furthermore the master NOC is able to see what's going on.
1248 The instances in the departments will serve a local interface, and allow the administrators
1249 to reschedule checks or acknowledge problems for their services.
1252 ### <a id="high-availability-features"></a> High Availability for Icinga 2 features
All nodes in the same zone require the same features enabled for High Availability (HA)
amongst them.
1259 * [Checks](#high-availability-checks) (load balanced, automated failover)
1260 * [Notifications](#high-availability-notifications) (load balanced, automated failover)
1261 * [DB IDO](#high-availability-db-ido) (Run-Once, automated failover)
1263 #### <a id="high-availability-checks"></a> High Availability with Checks
All nodes in the same zone load-balance the check execution. When one instance
fails, the other nodes will automatically take over the remaining checks.
> If a node should not check anything, disable the `checker` feature explicitly and
> reload Icinga 2.
1273 # icinga2 feature disable checker
1274 # service icinga2 reload
1276 #### <a id="high-availability-notifications"></a> High Availability with Notifications
Notifications are load balanced amongst all nodes in a zone. By default this functionality
is enabled.
If your nodes should notify independently of any other nodes (this will cause
duplicated notifications if not properly handled!), you can set `enable_ha = false`
in the [NotificationComponent](#objecttype-notificationcomponent) feature.
1284 #### <a id="high-availability-db-ido"></a> High Availability with DB IDO
1286 All instances within the same zone (e.g. the `master` zone as HA cluster) must
1287 have the DB IDO feature enabled.
1289 Example DB IDO MySQL:
1291 # icinga2 feature enable ido-mysql
1292 The feature 'ido-mysql' is already enabled.
1294 By default the DB IDO feature only runs on the elected zone master. All other passive
1295 nodes disable the active IDO database connection at runtime.
> The DB IDO HA feature can be disabled by setting the `enable_ha` attribute to `false`
> for the [IdoMysqlConnection](#objecttype-idomysqlconnection) or
> [IdoPgsqlConnection](#objecttype-idopgsqlconnection) object on all nodes in the
> same zone.
>
1304 > All endpoints will enable the DB IDO feature then, connect to the configured
1305 > database and dump configuration, status and historical data on their own.
1307 If the instance with the active DB IDO connection dies, the HA functionality will
1308 re-enable the DB IDO connection on the newly elected zone master.
1310 The DB IDO feature will try to determine which cluster endpoint is currently writing
1311 to the database and bail out if another endpoint is active. You can manually verify that
1312 by running the following query:
1314 icinga=> SELECT status_update_time, endpoint_name FROM icinga_programstatus;
1315 status_update_time | endpoint_name
1316 ------------------------+---------------
1317 2014-08-15 15:52:26+02 | icinga2a
This is useful when the cluster connection between endpoints breaks, and prevents
data duplication in split-brain scenarios. The failover timeout can be set using the
`failover_timeout` attribute, but not lower than 60 seconds.
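A sketch of both HA related attributes on the IDO connection object (credentials and the timeout value are placeholders; `enable_ha` defaults to `true`):

    object IdoMysqlConnection "ido-mysql" {
      user = "icinga"
      password = "icinga"
      host = "localhost"
      database = "icinga"

      enable_ha = true
      failover_timeout = 120s
    }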
1325 ### <a id="cluster-add-node"></a> Add a new cluster endpoint
1327 These steps are required for integrating a new cluster endpoint:
1329 * generate a new [SSL client certificate](#certificate-authority-certificates)
1330 * identify its location in the zones
1331 * update the `zones.conf` file on each involved node ([endpoint](#configure-cluster-endpoints), [zones](#configure-cluster-zones))
1332 * a new slave zone node requires updates for the master and slave zones
* verify if this endpoint requires [configuration synchronisation](#cluster-zone-config-sync) enabled
1334 * if the node requires the existing zone history: [initial cluster sync](#initial-cluster-sync)
1335 * add a [cluster health check](#cluster-health-check)
1337 #### <a id="initial-cluster-sync"></a> Initial Cluster Sync
1339 In order to make sure that all of your cluster nodes have the same state you will
1340 have to pick one of the nodes as your initial "master" and copy its state file
1341 to all the other nodes.
You can find the state file in `/var/lib/icinga2/icinga2.state`. Before copying
the state file you should make sure that all your cluster nodes are properly shut
down.
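A sketch of the manual copy, assuming `icinga2b` is another cluster node and Icinga 2 is stopped on all nodes:

    # scp /var/lib/icinga2/icinga2.state icinga2b:/var/lib/icinga2/icinga2.state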
1348 ### <a id="host-multiple-cluster-nodes"></a> Host With Multiple Cluster Nodes
1350 Special scenarios might require multiple cluster nodes running on a single host.
1351 By default Icinga 2 and its features will place their runtime data below the prefix
1352 `LocalStateDir`. By default packages will set that path to `/var`.
1353 You can either set that variable as constant configuration
1354 definition in [icinga2.conf](#icinga2-conf) or pass it as runtime variable to
1355 the Icinga 2 daemon.
1357 # icinga2 -c /etc/icinga2/node1/icinga2.conf -DLocalStateDir=/opt/node1/var