1 # <a id="configuring-icinga2-first-steps"></a> Configuring Icinga 2: First Steps
This chapter provides an introduction to best practices for your Icinga 2 configuration.
4 The configuration files which are automatically created when installing the Icinga 2 packages
5 are a good way to start with Icinga 2.
7 If you're interested in a detailed explanation of each language feature used in those
configuration files, you can find more information in the [Language Reference](19-language-reference.md#language-reference).
11 ## <a id="configuration-best-practice"></a> Configuration Best Practice
13 If you are ready to configure additional hosts, services, notifications,
dependencies, etc., you should think about the requirements first and then
decide on a strategy.
17 There are many ways of creating Icinga 2 configuration objects:
19 * Manually with your preferred editor, for example vi(m), nano, notepad, etc.
20 * Generated by a [configuration management tool](13-addons-plugins.md#configuration-tools) such as Puppet, Chef, Ansible, etc.
21 * A configuration addon for Icinga 2
22 * A custom exporter script from your CMDB or inventory tool
25 In order to find the best strategy for your own configuration, ask yourself the following questions:
27 * Do your hosts share a common group of services (for example linux hosts with disk, load, etc checks)?
* Does only a small set of users receive notifications and escalations for all hosts/services?
If you can answer at least one of these questions with yes, look into the
31 [apply rules](3-monitoring-basics.md#using-apply) logic instead of defining objects on a per
32 host and service basis.
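For example, a single apply rule can cover all Linux hosts at once (a minimal sketch; the service and attribute names are illustrative):

    apply Service "disk" {
      import "generic-service"

      check_command = "disk"

      assign where host.vars.os == "Linux"
    }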
* Are you required to define specific configuration for each host/service?
35 * Does your configuration generation tool already know about the host-service-relationship?
Then you should look into the object-specific configuration attributes such as `host_name`.
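In that case each object names its host explicitly, for example (hypothetical host name):

    object Service "disk" {
      host_name = "db-host-1"

      import "generic-service"
      check_command = "disk"
    }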
39 Finding the best files and directory tree for your configuration is up to you. Make sure that
the [icinga2.conf](4-configuring-icinga-2.md#icinga2-conf) configuration file includes them, for example:
* a directory tree based on locations, hostgroups or specific host attributes, with sub-levels of directories.
44 * flat `hosts.conf`, `services.conf`, etc files for rule based configuration.
45 * generated configuration with one file per host and a global configuration for groups, users, etc.
46 * one big file generated from an external application (probably a bad idea for maintaining changes).
Whichever strategy you choose, you should additionally check the following:
* Are there any specific attributes describing the host/service that you could set as `vars` custom attributes?
You can later use them for applying assign/ignore rules, or export them to external interfaces.
53 * Put hosts into hostgroups, services into servicegroups and use these attributes for your apply rules.
54 * Use templates to store generic attributes for your objects and apply rules making your configuration more readable.
55 Details can be found in the [using templates](3-monitoring-basics.md#object-inheritance-using-templates) chapter.
56 * Apply rules may overlap. Keep a central place (for example, [services.conf](4-configuring-icinga-2.md#services-conf) or [notifications.conf](4-configuring-icinga-2.md#notifications-conf)) storing
57 the configuration instead of defining apply rules deep in your configuration tree.
58 * Every plugin used as check, notification or event command requires a `Command` definition.
59 Further details can be looked up in the [check commands](3-monitoring-basics.md#check-commands) chapter.
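Putting these recommendations together, a minimal sketch could look like this (all names are illustrative):

    template Host "linux-host" {
      import "generic-host"

      vars.os = "Linux"
    }

    object Host "web01" {
      import "linux-host"

      address = "192.0.2.10"
      groups = [ "linux-servers" ]
    }

    apply Service "load" {
      import "generic-service"

      check_command = "load"

      assign where "linux-servers" in host.groups
    }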
61 If you happen to have further questions, do not hesitate to join the
62 [community support channels](https://support.icinga.org)
63 and ask community members for their experience and best practices.
65 ## <a id="configuring-icinga2-overview"></a> Configuration Overview
67 ### <a id="icinga2-conf"></a> icinga2.conf
69 An example configuration file is installed for you in `/etc/icinga2/icinga2.conf`.
71 Here's a brief description of the example configuration:
    /**
     * Icinga 2 configuration file
     * - this is where you define settings for the Icinga application including
     * which hosts/services to check.
     *
     * For an overview of all available configuration options please refer
     * to the documentation that is distributed as part of Icinga 2.
     */
82 Icinga 2 supports [C/C++-style comments](19-language-reference.md#comments).
    /**
     * The constants.conf defines global constants.
     */
    include "constants.conf"
89 The `include` directive can be used to include other files.
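For example, you could include an additional directory of your own configuration files (hypothetical path, relative to `/etc/icinga2`):

    include "custom/*.conf"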
    /**
     * The zones.conf defines zones for a cluster setup.
     * Not required for single instance setups.
     */
    include "zones.conf"
    /**
     * The Icinga Template Library (ITL) provides a number of useful templates
     * and command definitions.
     * Common monitoring plugin command definitions are included separately.
     */
    include <itl>
    include <plugins>
    /**
     * The features-available directory contains a number of configuration
     * files for features which can be enabled and disabled using the
     * icinga2 feature enable / icinga2 feature disable CLI commands.
     * These commands work by creating and removing symbolic links in
     * the features-enabled directory.
     */
    include "features-enabled/*.conf"
114 This `include` directive takes care of including the configuration files for all
115 the features which have been enabled with `icinga2 feature enable`. See
116 [Enabling/Disabling Features](8-cli-commands.md#features) for more details.
    /**
     * The repository.d directory contains all configuration objects
     * managed by the 'icinga2 repository' CLI commands.
     */
    include_recursive "repository.d"
This `include_recursive` directive is used for discovering services on remote clients
and including their generated configuration, as described in
126 [this chapter](10-icinga2-client.md#icinga2-remote-monitoring-master-discovery).
    /**
     * Although in theory you could define all your objects in this file
     * the preferred way is to create separate directories and files in the conf.d
     * directory. Each of these files must have the file extension ".conf".
     */
    include_recursive "conf.d"
136 You can put your own configuration files in the [conf.d](4-configuring-icinga-2.md#conf-d) directory. This
137 directive makes sure that all of your own configuration files are included.
    /**
     * The zones.d directory contains configuration files for satellite
     * instances.
     */
    include_zones "etc", "zones.d"
145 Configuration files for satellite instances are managed in 'zones'. This directive ensures
146 that all configuration files in the `zones.d` directory are included and that the `zones`
147 attribute for objects defined in this directory is set appropriately.
149 ### <a id="constants-conf"></a> constants.conf
151 The `constants.conf` configuration file can be used to define global constants.
By default, you need to make sure that these constants are set:
155 * The `PluginDir` constant must be set to the path where the [Monitoring Project plugins](2-getting-started.md#setting-up-check-plugins) are installed.
156 This constant is used by a number of
157 [built-in check command definitions](7-icinga-template-library.md#plugin-check-commands).
* The `NodeName` constant defines your local node name. It should be set to the FQDN, which is the default
if not set explicitly. This constant is required for local host configuration, monitoring remote clients and
cluster setups.
    /* The directory which contains the plugins from the Monitoring Plugins project. */
    const PluginDir = "/usr/lib64/nagios/plugins"
    /* The directory which contains the Manubulon plugins.
     * Check the documentation, chapter "SNMP Manubulon Plugin Check Commands", for details.
     */
    const ManubulonPluginDir = "/usr/lib64/nagios/plugins"
    /* Our local instance name. By default this is the server's hostname as returned by `hostname --fqdn`.
     * This should be the common name from the API certificate.
     */
    //const NodeName = "localhost"
    /* Our local zone name. */
    const ZoneName = NodeName
    /* Secret key for remote node tickets */
    const TicketSalt = ""
184 The `ZoneName` and `TicketSalt` constants are required for remote client
185 and distributed setups only.
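For reference, a single-instance `zones.conf` uses these constants roughly like this (a sketch; check your installed file):

    object Endpoint NodeName {
    }

    object Zone ZoneName {
      endpoints = [ NodeName ]
    }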
187 ### <a id="conf-d"></a> The conf.d Directory
189 This directory contains example configuration which should help you get started
190 with monitoring the local host and its services. It is included in the
191 [icinga2.conf](4-configuring-icinga-2.md#icinga2-conf) configuration file by default.
It can be used as a reference example for your own configuration strategy.
194 Just keep in mind to include the main directories in the
195 [icinga2.conf](4-configuring-icinga-2.md#icinga2-conf) file.
You are certainly not bound to it. Remove it if you prefer your own
198 way of deploying Icinga 2 configuration.
Further details on configuration best practices and how to build your
own strategy are described in [this chapter](4-configuring-icinga-2.md#configuration-best-practice).
203 Available configuration files which are installed by default:
205 * [hosts.conf](4-configuring-icinga-2.md#hosts-conf)
206 * [services.conf](4-configuring-icinga-2.md#services-conf)
207 * [users.conf](4-configuring-icinga-2.md#users-conf)
208 * [notifications.conf](4-configuring-icinga-2.md#notifications-conf)
209 * [commands.conf](4-configuring-icinga-2.md#commands-conf)
210 * [groups.conf](4-configuring-icinga-2.md#groups-conf)
211 * [templates.conf](4-configuring-icinga-2.md#templates-conf)
212 * [downtimes.conf](4-configuring-icinga-2.md#downtimes-conf)
213 * [timeperiods.conf](4-configuring-icinga-2.md#timeperiods-conf)
214 * [satellite.conf](4-configuring-icinga-2.md#satellite-conf)
216 #### <a id="hosts-conf"></a> hosts.conf
218 The `hosts.conf` file contains an example host based on your
219 `NodeName` setting in [constants.conf](4-configuring-icinga-2.md#constants-conf). You
can use global constants for your object names instead of string values.
223 The `import` keyword is used to import the `generic-host` template which
224 takes care of setting up the host check command to `hostalive`. If you
225 require a different check command, you can override it in the object definition.
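For example, a host that does not respond to ping could override the inherited check command (hypothetical host; the `dummy` check command is provided by the ITL):

    object Host "printer-01" {
      import "generic-host"

      address = "192.0.2.20"

      /* Override the `hostalive` check command inherited from `generic-host`. */
      check_command = "dummy"
    }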
227 The `vars` attribute can be used to define custom attributes which are available
228 for check and notification commands. Most of the [Plugin Check Commands](7-icinga-template-library.md#plugin-check-commands)
229 in the Icinga Template Library require an `address` attribute.
231 The custom attribute `os` is evaluated by the `linux-servers` group in
232 [groups.conf](4-configuring-icinga-2.md#groups-conf) making the local host a member.
The example host will show you how to:
236 * define http vhost attributes for the `http` service apply rule defined
237 in [services.conf](4-configuring-icinga-2.md#services-conf).
238 * define disks (all, specific `/`) and their attributes for the `disk`
239 service apply rule defined in [services.conf](4-configuring-icinga-2.md#services-conf).
240 * define notification types (`mail`) and set the groups attribute. This
will be used by notification apply rules in [notifications.conf](4-configuring-icinga-2.md#notifications-conf).
243 If you've installed [Icinga Web 2](2-getting-started.md#setting-up-icingaweb2) you can
244 uncomment the http vhost attributes and reload Icinga 2. The apply
245 rules in [services.conf](4-configuring-icinga-2.md#services-conf) will automatically
generate a new service checking the `/icingaweb2` URI using the `http`
check command.
    /**
     * Host definitions with object attributes
     * used for apply rules for Service, Notification,
     * Dependency and ScheduledDowntime objects.
     *
     * Tip: Use `icinga2 object list --type Host` to
     * list all host objects after running
     * configuration validation (`icinga2 daemon -C`).
     */

    /**
     * This is an example host based on your
     * local host's FQDN. Specify the NodeName
     * constant in `constants.conf` or use your
     * own description, e.g. "db-host-1".
     */
    object Host NodeName {
      /* Import the default host template defined in `templates.conf`. */
      import "generic-host"

      /* Specify the address attributes for checks e.g. `ssh` or `http`. */
      address = "127.0.0.1"
      address6 = "::1"

      /* Set custom attribute `os` for hostgroup assignment in `groups.conf`. */
      vars.os = "Linux"

      /* Define http vhost attributes for service apply rules in `services.conf`. */
      vars.http_vhosts["http"] = {
        http_uri = "/"
      }
      /* Uncomment if you've successfully installed Icinga Web 2. */
      //vars.http_vhosts["Icinga Web 2"] = {
      //  http_uri = "/icingaweb2"
      //}

      /* Define disks and attributes for service apply rules in `services.conf`. */
      vars.disks["disk"] = {
        /* No parameters. */
      }
      vars.disks["disk /"] = {
        disk_partitions = "/"
      }

      /* Define notification mail attributes for notification apply rules in `notifications.conf`. */
      vars.notification["mail"] = {
        /* The UserGroup `icingaadmins` is defined in `users.conf`. */
        groups = [ "icingaadmins" ]
      }
    }
301 This is only the host object definition. Now we'll need to make sure that this
302 host and your additional hosts are getting [services](4-configuring-icinga-2.md#services-conf) applied.
> **Tip**
>
> If you don't understand all the attributes and how to use [apply rules](19-language-reference.md#apply),
> don't worry - the [monitoring basics](3-monitoring-basics.md#monitoring-basics) chapter will explain
> that in detail.
310 #### <a id="services-conf"></a> services.conf
312 These service [apply rules](19-language-reference.md#apply) will show you how to monitor
313 the local host, but also allow you to re-use or modify them for
314 your own requirements.
316 You should define all your service apply rules in `services.conf`
or any other central location to keep them organized.
By default, the local host will be monitored by the following services:
321 Service(s) | Applied on host(s)
322 --------------------------------------------|------------------------
323 `load`, `procs`, `swap`, `users`, `icinga` | The `NodeName` host only
324 `ping4`, `ping6` | All hosts with `address` resp. `address6` attribute
325 `ssh` | All hosts with `address` and `vars.os` set to `Linux`
326 `http`, optional: `Icinga Web 2` | All hosts with custom attribute `http_vhosts` defined as dictionary
327 `disk`, `disk /` | All hosts with custom attribute `disks` defined as dictionary
329 The Debian packages also include an additional `apt` service check applied to the local host.
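That rule looks roughly like this (a sketch; the exact definition may differ between package versions):

    apply Service "apt" {
      import "generic-service"

      check_command = "apt"

      assign where host.name == NodeName
    }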
331 The command object `icinga` for the embedded health check is provided by the
332 [Icinga Template Library (ITL)](7-icinga-template-library.md#icinga-template-library) while `http_ip`, `ssh`, `load`, `processes`,
333 `users` and `disk` are all provided by the [Plugin Check Commands](7-icinga-template-library.md#plugin-check-commands)
which we enabled earlier by including the `itl` and `plugins` configuration files.
337 Example `load` service apply rule:
    apply Service "load" {
      import "generic-service"

      check_command = "load"

      /* Used by the ScheduledDowntime apply rule in `downtimes.conf`. */
      vars.backup_downtime = "02:00-03:00"

      assign where host.name == NodeName
    }
The `apply` keyword can be used to create new objects which are associated with
another group of objects. You can `import` existing templates and define (custom)
attributes inside the apply rule.
The custom attribute `backup_downtime` is set to a specific time range string.
355 This variable value will be used for applying a `ScheduledDowntime` object to
356 these services in [downtimes.conf](4-configuring-icinga-2.md#downtimes-conf).
358 In this example the `assign where` condition is a boolean expression which is
359 evaluated for all objects of type `Host` and a new service with name "load"
360 is created for each matching host. [Expression operators](19-language-reference.md#expression-operators)
361 may be used in `assign where` conditions.
Multiple `assign where` conditions can be combined with `AND` using the `&&` operator
364 as shown in the `ssh` example:
    apply Service "ssh" {
      import "generic-service"

      check_command = "ssh"

      assign where host.address && host.vars.os == "Linux"
    }
374 In this example, the service `ssh` is applied to all hosts having the `address`
attribute defined `AND` having the custom attribute `os` set to the string `Linux`.
You can modify this condition to match multiple expressions by combining `AND`
378 and `OR` using `&&` and `||` [operators](19-language-reference.md#expression-operators), for example
379 `assign where host.address && (vars.os == "Linux" || vars.os == "Windows")`.
382 A more advanced example is shown by the `http` and `disk` service apply
383 rules. While one `apply` rule for `ssh` will only create a service for matching
384 hosts, you can go one step further: Generate apply rules based on array items
385 or dictionary key-value pairs.
387 The idea is simple: Your host in [hosts.conf](4-configuring-icinga-2.md#hosts-conf) defines the
388 `disks` dictionary as custom attribute in `vars`.
390 Remember the example from [hosts.conf](4-configuring-icinga-2.md#hosts-conf):
    /* Define disks and attributes for service apply rules in `services.conf`. */
    vars.disks["disk"] = {
      /* No parameters. */
    }
    vars.disks["disk /"] = {
      disk_partitions = "/"
    }
This dictionary contains multiple service names we want to monitor. `disk`
simply checks all available disks, while `disk /` passes the additional
parameter `disk_partitions` to the check command.
You'll recognize that the naming is important: it is the very same name
that is passed from the service to the check command's arguments. Read about services
and passing check command parameters in [this chapter](3-monitoring-basics.md#command-passing-parameters).
Using `apply Service for` omits an explicit service name; instead, the key stored in
the `disk` variable of the `key => config` loop becomes the new service object's name.
415 The `for` keyword expects a loop definition, for example `key => value in dictionary`
416 as known from Perl and other scripting languages.
418 Once defined like this, the `apply` rule defined below will do the following:
420 * only match hosts with `host.vars.disks` defined through the `assign where` condition
421 * loop through all entries in the `host.vars.disks` dictionary. That's `disk` and `disk /` as keys.
422 * call `apply` on each, and set the service object name from the provided key
423 * inside apply, the `generic-service` template is imported
* defining the [disk](7-icinga-template-library.md#plugin-check-command-disk) check command which accepts command arguments such as `disk_partitions`
* adding the `config` dictionary items to `vars`. Simply said, there's now `vars.disk_partitions` defined for the service generated from the `disk /` entry.
428 Configuration example:
    apply Service for (disk => config in host.vars.disks) {
      import "generic-service"

      check_command = "disk"
      vars += config
    }
A similar example is used for the `http` services. That way you can make your
host the information provider for all apply rules: define the attributes once on
the host, and the apply rules generate the matching services.
Look into [notifications.conf](4-configuring-icinga-2.md#notifications-conf) to see how this technique is used
for applying notifications to hosts and services based on their type and user attributes.
446 Don't forget to install the [check plugins](2-getting-started.md#setting-up-check-plugins) required by
447 the hosts and services and their check commands.
449 Further details on the monitoring configuration can be found in the
450 [monitoring basics](3-monitoring-basics.md#monitoring-basics) chapter.
452 #### <a id="users-conf"></a> users.conf
454 Defines the `icingaadmin` User and the `icingaadmins` UserGroup. The latter is used in
455 [hosts.conf](4-configuring-icinga-2.md#hosts-conf) for defining a custom host attribute later used in
456 [notifications.conf](4-configuring-icinga-2.md#notifications-conf) for notification apply rules.
    object User "icingaadmin" {
      import "generic-user"

      display_name = "Icinga 2 Admin"
      groups = [ "icingaadmins" ]

      email = "icinga@localhost"
    }

    object UserGroup "icingaadmins" {
      display_name = "Icinga 2 Admin Group"
    }
472 #### <a id="notifications-conf"></a> notifications.conf
Notifications for check alerts are an integral part of your
475 Icinga 2 monitoring stack.
477 The examples in this file define two notification apply rules for hosts and services.
478 Both `apply` rules match on the same condition: They are only applied if the
479 nested dictionary attribute `notification.mail` is set.
481 Please note that the `to` keyword is important in [notification apply rules](3-monitoring-basics.md#using-apply-notifications)
defining whether these notifications are applied to hosts or services.
483 The `import` keyword imports the specific mail templates defined in [templates.conf](4-configuring-icinga-2.md#templates-conf).
485 The `interval` attribute is not explicitly set - it [defaults to 30 minutes](6-object-types.md#objecttype-notification).
487 By setting the `user_groups` to the value provided by the
488 respective [host.vars.notification.mail](4-configuring-icinga-2.md#hosts-conf) attribute we'll
implicitly use the `icingaadmins` UserGroup defined in [users.conf](4-configuring-icinga-2.md#users-conf).
    apply Notification "mail-icingaadmin" to Host {
      import "mail-host-notification"

      user_groups = host.vars.notification.mail.groups
      users = host.vars.notification.mail.users

      assign where host.vars.notification.mail
    }

    apply Notification "mail-icingaadmin" to Service {
      import "mail-service-notification"

      user_groups = host.vars.notification.mail.groups
      users = host.vars.notification.mail.users

      assign where host.vars.notification.mail
    }
509 More details on defining notifications and their additional attributes such as
510 filters can be read in [this chapter](3-monitoring-basics.md#notifications).
#### <a id="commands-conf"></a> commands.conf
This is the place where you can define your own command configuration. By default,
it only contains the notification commands used by the notification templates defined in [templates.conf](4-configuring-icinga-2.md#templates-conf).
517 You can freely customize these notification commands, and adapt them for your needs.
518 Read more on that topic [here](3-monitoring-basics.md#notification-commands).
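A trimmed sketch of such a notification command (the shipped `commands.conf` defines more environment variables; check your installed file):

    object NotificationCommand "mail-host-notification" {
      import "plugin-notification-command"

      command = [ SysconfDir + "/icinga2/scripts/mail-notification.sh" ]

      /* Pass runtime macros to the script via environment variables. */
      env = {
        NOTIFICATIONTYPE = "$notification.type$"
        HOSTNAME = "$host.name$"
        HOSTSTATE = "$host.state$"
      }
    }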
#### <a id="groups-conf"></a> groups.conf
The example host defined in [hosts.conf](4-configuring-icinga-2.md#hosts-conf) already has the
523 custom attribute `os` set to `Linux` and is therefore automatically
524 a member of the host group `linux-servers`.
This is done using [group assign](19-language-reference.md#group-assign) expressions, similar
527 to previously seen [apply rules](3-monitoring-basics.md#using-apply).
    object HostGroup "linux-servers" {
      display_name = "Linux Servers"

      assign where host.vars.os == "Linux"
    }

    object HostGroup "windows-servers" {
      display_name = "Windows Servers"

      assign where host.vars.os == "Windows"
    }
Services can be added to service groups using similar pattern matches.
The [match() function](19-language-reference.md#function-calls) expects a wildcard match string
and the attribute string to match against.
    object ServiceGroup "ping" {
      display_name = "Ping Checks"

      assign where match("ping*", service.name)
    }

    object ServiceGroup "http" {
      display_name = "HTTP Checks"

      assign where match("http*", service.check_command)
    }

    object ServiceGroup "disk" {
      display_name = "Disk Checks"

      assign where match("disk*", service.check_command)
    }
564 #### <a id="templates-conf"></a> templates.conf
Most of the example configuration objects use generic global templates by
default:
    template Host "generic-host" {
      max_check_attempts = 5
      check_interval = 1m
      retry_interval = 30s

      check_command = "hostalive"
    }

    template Service "generic-service" {
      max_check_attempts = 3
      check_interval = 1m
      retry_interval = 30s
    }
583 The `hostalive` check command is part of the
584 [Plugin Check Commands](7-icinga-template-library.md#plugin-check-commands).
    template Notification "mail-host-notification" {
      command = "mail-host-notification"

      states = [ Up, Down ]
      types = [ Problem, Acknowledgement, Recovery, Custom,
                FlappingStart, FlappingEnd,
                DowntimeStart, DowntimeEnd, DowntimeRemoved ]

      period = "24x7"
    }

    template Notification "mail-service-notification" {
      command = "mail-service-notification"

      states = [ OK, Warning, Critical, Unknown ]
      types = [ Problem, Acknowledgement, Recovery, Custom,
                FlappingStart, FlappingEnd,
                DowntimeStart, DowntimeEnd, DowntimeRemoved ]

      period = "24x7"
    }
609 More details on `Notification` object attributes can be found [here](6-object-types.md#objecttype-notification).
612 #### <a id="downtimes-conf"></a> downtimes.conf
614 The `load` service apply rule defined in [services.conf](4-configuring-icinga-2.md#services-conf) defines
615 the `backup_downtime` custom attribute.
617 The [ScheduledDowntime](6-object-types.md#objecttype-scheduleddowntime) apply rule uses this attribute
618 to define the default value for the time ranges required for recurring downtime slots.
    apply ScheduledDowntime "backup-downtime" to Service {
      author = "icingaadmin"
      comment = "Scheduled downtime for backup"

      ranges = {
        monday = service.vars.backup_downtime
        tuesday = service.vars.backup_downtime
        wednesday = service.vars.backup_downtime
        thursday = service.vars.backup_downtime
        friday = service.vars.backup_downtime
        saturday = service.vars.backup_downtime
        sunday = service.vars.backup_downtime
      }

      assign where service.vars.backup_downtime != ""
    }
638 #### <a id="timeperiods-conf"></a> timeperiods.conf
640 This file contains the default timeperiod definitions for `24x7`, `9to5`
and `never`. TimePeriod objects are referenced by the `*period` attributes
of objects such as hosts, services or notifications.
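For example, the `9to5` definition follows this pattern (a sketch; check your installed `timeperiods.conf`):

    object TimePeriod "9to5" {
      import "legacy-timeperiod"

      display_name = "Icinga 9to5 TimePeriod"
      ranges = {
        "monday"    = "09:00-17:00"
        "tuesday"   = "09:00-17:00"
        "wednesday" = "09:00-17:00"
        "thursday"  = "09:00-17:00"
        "friday"    = "09:00-17:00"
      }
    }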
645 #### <a id="satellite-conf"></a> satellite.conf
647 Includes default templates and dependencies for
648 [monitoring remote clients](10-icinga2-client.md#icinga2-client)
649 using service discovery and
650 [config generation](10-icinga2-client.md#icinga2-remote-monitoring-master-discovery)
on the master. It can be ignored/removed on setups not using these features.
654 Further details on the monitoring configuration can be found in the
655 [monitoring basics](3-monitoring-basics.md#monitoring-basics) chapter.