# <a id="configuring-icinga2-first-steps"></a> Configuring Icinga 2: First Steps

This chapter provides an introduction to best practices with your Icinga 2 configuration.
The configuration files which are automatically created when installing the Icinga 2 packages
are a good way to start with Icinga 2.

The [Language Reference](17-language-reference.md#language-reference) chapter explains details
on value types (string, number, dictionaries, etc.) and the general configuration syntax.
## <a id="configuration-best-practice"></a> Configuration Best Practice

If you are ready to configure additional hosts, services, notifications,
dependencies, etc., you should think about the requirements first and then
decide on a possible strategy.

There are many ways of creating Icinga 2 configuration objects:

* Manually with your preferred editor, for example vi(m), nano, notepad, etc.
* A configuration tool for Icinga 2, e.g. the [Icinga Director](https://github.com/Icinga/icingaweb2-module-director)
* Generated by a [configuration management tool](13-addons.md#configuration-tools) such as Puppet, Chef, Ansible, etc.
* A custom exporter script from your CMDB or inventory tool
Find the best strategy for your own configuration and ask yourself the following questions:

* Do your hosts share a common group of services (for example linux hosts with disk, load, etc. checks)?
* Does only a small set of users receive notifications and escalations for all hosts/services?

If you can answer at least one of these questions with yes, look into the
[apply rules](3-monitoring-basics.md#using-apply) logic instead of defining objects on a per
host and service basis.

* Are you required to define specific configuration for each host/service?
* Does your configuration generation tool already know about the host-service-relationship?

Then you should look into object specific configuration settings such as `host_name`.
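For example, instead of an apply rule you could bind a service to one specific host via `host_name`. A minimal sketch -- the service name `backup-job` and host name `db-host-1` are hypothetical placeholders:

    object Service "backup-job" {
      /* Hypothetical host name -- replace with one of your Host object names. */
      host_name = "db-host-1"

      import "generic-service"

      /* The `dummy` check command is provided by the Icinga Template Library. */
      check_command = "dummy"
    }

A configuration generation tool which already knows the host-service relationship can emit such objects directly, one per relationship.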
You decide on the "best" layout for configuration files and directories. Ensure that
the [icinga2.conf](4-configuring-icinga-2.md#icinga2-conf) configuration file includes them.

Possible layouts include:

* tree-based on locations, host groups, specific host attributes with sub levels of directories.
* flat `hosts.conf`, `services.conf`, etc. files for rule based configuration.
* generated configuration with one file per host and a global configuration for groups, users, etc.
* one big file generated from an external application (probably a bad idea for maintaining changes).
Whichever strategy you choose, you should additionally check the following:

* Are there any specific attributes describing the host/service you could set as `vars` custom attributes?
  You can later use them for applying assign/ignore rules, or export them into external interfaces.
* Put hosts into hostgroups, services into servicegroups and use these attributes for your apply rules.
* Use templates to store generic attributes for your objects and apply rules, making your configuration more readable.
  Details can be found in the [using templates](3-monitoring-basics.md#object-inheritance-using-templates) chapter.
* Apply rules may overlap. Keep a central place (for example, [services.conf](4-configuring-icinga-2.md#services-conf) or [notifications.conf](4-configuring-icinga-2.md#notifications-conf)) storing
  the configuration instead of defining apply rules deep in your configuration tree.
* Every plugin used as check, notification or event command requires a `Command` definition.
  Further details can be looked up in the [check commands](3-monitoring-basics.md#check-commands) chapter.
If you are planning to use a distributed monitoring setup with master, satellite and client installations,
take the configuration location into account too. Will everything be configured on the master and synced
to all other nodes? Or is there any node-specific local configuration (e.g. health checks)?

There is a detailed chapter on [distributed monitoring scenarios](6-distributed-monitoring.md#distributed-monitoring-scenarios).
Please make sure to read the [introduction](6-distributed-monitoring.md#distributed-monitoring) first.

If you happen to have further questions, do not hesitate to join the
[community support channels](https://www.icinga.com/community/get-involved/)
and ask community members for their experience and best practices.
## <a id="your-configuration"></a> Your Configuration

If you prefer to organize your own local object tree, you can also remove
`include_recursive "conf.d"` from your icinga2.conf file.

Create a new configuration directory, e.g. `objects.d`, and include it
in your icinga2.conf file.

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/objects.d

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/icinga2.conf

    /* Local object configuration on our master instance. */
    include_recursive "objects.d"

This approach is used by the [Icinga 2 Puppet module](https://github.com/Icinga/puppet-icinga2).

If you plan to set up a distributed environment with HA clusters and clients, please refer to
[this chapter](6-distributed-monitoring.md#distributed-monitoring-top-down)
for examples with `zones.d` as configuration directory.
## <a id="configuring-icinga2-overview"></a> Configuration Overview

### <a id="icinga2-conf"></a> icinga2.conf

An example configuration file is installed for you in `/etc/icinga2/icinga2.conf`.

Here's a brief description of the example configuration:

    /**
     * Icinga 2 configuration file
     * -- this is where you define settings for the Icinga application including
     * which hosts/services to check.
     *
     * For an overview of all available configuration options please refer
     * to the documentation that is distributed as part of Icinga 2.
     */

Icinga 2 supports [C/C++-style comments](17-language-reference.md#comments).
    /**
     * The constants.conf defines global constants.
     */
    include "constants.conf"

The `include` directive can be used to include other files.

    /**
     * The zones.conf defines zones for a cluster setup.
     * Not required for single instance setups.
     */
    include "zones.conf"

The [Icinga Template Library](10-icinga-template-library.md#icinga-template-library) provides a set of common templates
and [CheckCommand](3-monitoring-basics.md#check-commands) definitions.
    /**
     * The Icinga Template Library (ITL) provides a number of useful templates
     * and command definitions.
     * Common monitoring plugin command definitions are included separately.
     */
    include <itl>
    include <plugins>
    include <plugins-contrib>
    include <manubulon>

    /**
     * This includes the Icinga 2 Windows plugins. These command definitions
     * are required on a master node when a client is used as command endpoint.
     */
    include <windows-plugins>

    /**
     * This includes the NSClient++ check commands. These command definitions
     * are required on a master node when a client is used as command endpoint.
     */
    include <nscp>

    /**
     * The features-available directory contains a number of configuration
     * files for features which can be enabled and disabled using the
     * icinga2 feature enable / icinga2 feature disable CLI commands.
     * These commands work by creating and removing symbolic links in
     * the features-enabled directory.
     */
    include "features-enabled/*.conf"

This `include` directive takes care of including the configuration files for all
the features which have been enabled with `icinga2 feature enable`. See
[Enabling/Disabling Features](11-cli-commands.md#enable-features) for more details.
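For example, enabling the `command` feature creates the corresponding symlink in `features-enabled`, and disabling it removes the link again. Remember to reload or restart the `icinga2` service afterwards:

    [root@icinga2-master1.localdomain /]# icinga2 feature enable command

    [root@icinga2-master1.localdomain /]# icinga2 feature disable command
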
    /**
     * The repository.d directory contains all configuration objects
     * managed by the 'icinga2 repository' CLI commands.
     */
    include_recursive "repository.d"

This `include_recursive` directive is used for discovery of services on remote clients
and their generated configuration described in
[this chapter](6-distributed-monitoring.md#distributed-monitoring-bottom-up).

**Note**: This has been DEPRECATED in Icinga 2 v2.6 and is **not** required for
satellites and clients using the [top down approach](6-distributed-monitoring.md#distributed-monitoring-top-down).
You can safely disable/remove it.

    /**
     * Although in theory you could define all your objects in this file
     * the preferred way is to create separate directories and files in the conf.d
     * directory. Each of these files must have the file extension ".conf".
     */
    include_recursive "conf.d"

You can put your own configuration files in the [conf.d](4-configuring-icinga-2.md#conf-d) directory. This
directive makes sure that all of your own configuration files are included.
### <a id="constants-conf"></a> constants.conf

The `constants.conf` configuration file can be used to define global constants.

By default, you need to set the following constants:

* The `PluginDir` constant must be set to the path where the [Monitoring Project plugins](2-getting-started.md#setting-up-check-plugins) are installed.
This constant is used by a number of
[built-in check command definitions](10-icinga-template-library.md#plugin-check-commands).
* The `NodeName` constant defines your local node name. It should be set to the FQDN, which is the default
if not set. This constant is required for local host configuration, monitoring remote clients and
cluster setups.

Example:

    /* The directory which contains the plugins from the Monitoring Plugins project. */
    const PluginDir = "/usr/lib64/nagios/plugins"

    /* The directory which contains the Manubulon plugins.
     * Check the documentation, chapter "SNMP Manubulon Plugin Check Commands", for details.
     */
    const ManubulonPluginDir = "/usr/lib64/nagios/plugins"

    /* Our local instance name. By default this is the server's hostname as returned by `hostname --fqdn`.
     * This should be the common name from the API certificate.
     */
    //const NodeName = "localhost"

    /* Our local zone name. */
    const ZoneName = NodeName

    /* Secret key for remote node tickets */
    const TicketSalt = ""

The `ZoneName` and `TicketSalt` constants are required for remote client
and distributed setups only.
### <a id="zones-conf"></a> zones.conf

This file can be used to specify the required [Zone](9-object-types.md#objecttype-zone)
and [Endpoint](9-object-types.md#objecttype-endpoint) configuration objects for
[distributed monitoring](6-distributed-monitoring.md#distributed-monitoring).

By default the `NodeName` and `ZoneName` [constants](4-configuring-icinga-2.md#constants-conf) will be used.

It also contains several [global zones](6-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
for distributed monitoring environments.

Please make sure to modify this configuration with real names, i.e. use the FQDN
mentioned in [this chapter](6-distributed-monitoring.md#distributed-monitoring-conventions)
for your `Zone` and `Endpoint` object names.
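A minimal sketch of such Zone and Endpoint objects for a single master setup -- the FQDN `icinga2-master1.localdomain` is an example placeholder to be replaced with your real node name:

    object Endpoint "icinga2-master1.localdomain" {
    }

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain" ]
    }

    /* Global zone for templates and commands synced to all nodes. */
    object Zone "global-templates" {
      global = true
    }

Additional satellite and client zones reference the master zone via the `parent` attribute; see the distributed monitoring chapter for complete scenarios.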
### <a id="conf-d"></a> The conf.d Directory

This directory contains **example configuration** which should help you get started
with monitoring the local host and its services. It is included in the
[icinga2.conf](4-configuring-icinga-2.md#icinga2-conf) configuration file by default.

It can be used as a reference example for your own configuration strategy.
Just keep in mind to include the main directories in the
[icinga2.conf](4-configuring-icinga-2.md#icinga2-conf) file.

> **Note**
>
> You can remove the include directive in [icinga2.conf](4-configuring-icinga-2.md#icinga2-conf)
> if you prefer your own way of deploying Icinga 2 configuration.

Further details on configuration best practice and how to build your
own strategy are described in [this chapter](4-configuring-icinga-2.md#configuration-best-practice).
Available configuration files which are installed by default:

* [hosts.conf](4-configuring-icinga-2.md#hosts-conf)
* [services.conf](4-configuring-icinga-2.md#services-conf)
* [users.conf](4-configuring-icinga-2.md#users-conf)
* [notifications.conf](4-configuring-icinga-2.md#notifications-conf)
* [commands.conf](4-configuring-icinga-2.md#commands-conf)
* [groups.conf](4-configuring-icinga-2.md#groups-conf)
* [templates.conf](4-configuring-icinga-2.md#templates-conf)
* [downtimes.conf](4-configuring-icinga-2.md#downtimes-conf)
* [timeperiods.conf](4-configuring-icinga-2.md#timeperiods-conf)
* [satellite.conf](4-configuring-icinga-2.md#satellite-conf)
* [api-users.conf](4-configuring-icinga-2.md#api-users-conf)
* [app.conf](4-configuring-icinga-2.md#app-conf)
#### <a id="hosts-conf"></a> hosts.conf

The `hosts.conf` file contains an example host based on your
`NodeName` setting in [constants.conf](4-configuring-icinga-2.md#constants-conf). You
can use global constants for your object names instead of string
values.

The `import` keyword is used to import the `generic-host` template which
takes care of setting up the host check command to `hostalive`. If you
require a different check command, you can override it in the object definition.

The `vars` attribute can be used to define custom attributes which are available
for check and notification commands. Most of the [Plugin Check Commands](10-icinga-template-library.md#plugin-check-commands)
in the Icinga Template Library require an `address` attribute.

The custom attribute `os` is evaluated by the `linux-servers` group in
[groups.conf](4-configuring-icinga-2.md#groups-conf), making the local host a member.

The example host will show you how to

* define http vhost attributes for the `http` service apply rule defined
in [services.conf](4-configuring-icinga-2.md#services-conf).
* define disks (all, specific `/`) and their attributes for the `disk`
service apply rule defined in [services.conf](4-configuring-icinga-2.md#services-conf).
* define notification types (`mail`) and set the groups attribute. This
will be used by notification apply rules in [notifications.conf](4-configuring-icinga-2.md#notifications-conf).

If you've installed [Icinga Web 2](2-getting-started.md#setting-up-icingaweb2), you can
uncomment the http vhost attributes and reload Icinga 2. The apply
rules in [services.conf](4-configuring-icinga-2.md#services-conf) will automatically
generate a new service checking the `/icingaweb2` URI using the `http`
check command.
    /*
     * Host definitions with object attributes
     * used for apply rules for Service, Notification,
     * Dependency and ScheduledDowntime objects.
     *
     * Tip: Use `icinga2 object list --type Host` to
     * list all host objects after running
     * configuration validation (`icinga2 daemon -C`).
     */

    /*
     * This is an example host based on your
     * local host's FQDN. Specify the NodeName
     * constant in `constants.conf` or use your
     * own description, e.g. "db-host-1".
     */

    object Host NodeName {
      /* Import the default host template defined in `templates.conf`. */
      import "generic-host"

      /* Specify the address attributes for checks e.g. `ssh` or `http`. */
      address = "127.0.0.1"
      address6 = "::1"

      /* Set custom attribute `os` for hostgroup assignment in `groups.conf`. */
      vars.os = "Linux"

      /* Define http vhost attributes for service apply rules in `services.conf`. */
      vars.http_vhosts["http"] = {
        http_uri = "/"
      }
      /* Uncomment if you've successfully installed Icinga Web 2. */
      //vars.http_vhosts["Icinga Web 2"] = {
      //  http_uri = "/icingaweb2"
      //}

      /* Define disks and attributes for service apply rules in `services.conf`. */
      vars.disks["disk"] = {
        /* No parameters. */
      }
      vars.disks["disk /"] = {
        disk_partitions = "/"
      }

      /* Define notification mail attributes for notification apply rules in `notifications.conf`. */
      vars.notification["mail"] = {
        /* The UserGroup `icingaadmins` is defined in `users.conf`. */
        groups = [ "icingaadmins" ]
      }
    }
This is only the host object definition. Now we'll need to make sure that this
host and your additional hosts are getting [services](4-configuring-icinga-2.md#services-conf) applied.

> **Note**
>
> If you don't understand all the attributes and how to use [apply rules](17-language-reference.md#apply),
> don't worry -- the [monitoring basics](3-monitoring-basics.md#monitoring-basics) chapter will explain
> that in detail.
#### <a id="services-conf"></a> services.conf

These service [apply rules](17-language-reference.md#apply) will show you how to monitor
the local host, but also allow you to re-use or modify them for
your own requirements.

You should define all your service apply rules in `services.conf`
or any other central location, keeping them organized.

By default, the local host will be monitored by the following services:

Service(s)                                  | Applied on host(s)
--------------------------------------------|------------------------
`load`, `procs`, `swap`, `users`, `icinga`  | The `NodeName` host only
`ping4`, `ping6`                            | All hosts with `address` resp. `address6` attribute
`ssh`                                       | All hosts with `address` and `vars.os` set to `Linux`
`http`, optional: `Icinga Web 2`            | All hosts with custom attribute `http_vhosts` defined as dictionary
`disk`, `disk /`                            | All hosts with custom attribute `disks` defined as dictionary

The Debian packages also include an additional `apt` service check applied to the local host.

The command object `icinga` for the embedded health check is provided by the
[Icinga Template Library (ITL)](10-icinga-template-library.md#icinga-template-library) while `http_ip`, `ssh`, `load`, `processes`,
`users` and `disk` are all provided by the [Plugin Check Commands](10-icinga-template-library.md#plugin-check-commands)
which we enabled earlier by including the `itl` and `plugins` configuration file.
Example `load` service apply rule:

    apply Service "load" {
      import "generic-service"

      check_command = "load"

      /* Used by the ScheduledDowntime apply rule in `downtimes.conf`. */
      vars.backup_downtime = "02:00-03:00"

      assign where host.name == NodeName
    }

The `apply` keyword can be used to create new objects which are associated with
another group of objects. You can `import` existing templates and define (custom)
attributes.

The custom attribute `backup_downtime` is set to a specific timerange string.
This variable value will be used for applying a `ScheduledDowntime` object to
these services in [downtimes.conf](4-configuring-icinga-2.md#downtimes-conf).

In this example the `assign where` condition is a boolean expression which is
evaluated for all objects of type `Host`, and a new service with name "load"
is created for each matching host. [Expression operators](17-language-reference.md#expression-operators)
may be used in `assign where` conditions.
Multiple `assign where` conditions can be combined with `AND` using the `&&` operator
as shown in the `ssh` example:

    apply Service "ssh" {
      import "generic-service"

      check_command = "ssh"

      assign where host.address && host.vars.os == "Linux"
    }

In this example, the service `ssh` is applied to all hosts having the `address`
attribute defined `AND` having the custom attribute `os` set to the string
`Linux`.
You can modify this condition to match multiple expressions by combining `AND`
and `OR` using the `&&` and `||` [operators](17-language-reference.md#expression-operators), for example
`assign where host.address && (vars.os == "Linux" || vars.os == "Windows")`.
A more advanced example is shown by the `http` and `disk` service apply
rules. While one `apply` rule for `ssh` will only create a service for matching
hosts, you can go one step further: Generate apply rules based on array items
or dictionary key-value pairs.

The idea is simple: Your host in [hosts.conf](4-configuring-icinga-2.md#hosts-conf) defines the
`disks` dictionary as custom attribute in `vars`.

Remember the example from [hosts.conf](4-configuring-icinga-2.md#hosts-conf):

      /* Define disks and attributes for service apply rules in `services.conf`. */
      vars.disks["disk"] = {
        /* No parameters. */
      }
      vars.disks["disk /"] = {
        disk_partitions = "/"
      }

This dictionary contains multiple service names we want to monitor. `disk`
should just check all available disks, while `disk /` will pass an additional
parameter `disk_partitions` to the check command.

You'll recognize that the naming is important -- that's the very same name
as it is passed from a service to a check command argument. Read about services
and passing check commands in [this chapter](3-monitoring-basics.md#command-passing-parameters).

Using `apply Service for` omits the service name; it will take the key stored in
the `disk` variable in `key => config` as the new service object name.

The `for` keyword expects a loop definition, for example `key => value in dictionary`
as known from Perl and other scripting languages.

Once defined like this, the `apply` rule defined below will do the following:

* only match hosts with `host.vars.disks` defined through the `assign where` condition
* loop through all entries in the `host.vars.disks` dictionary. That's `disk` and `disk /` as keys.
* call `apply` on each, and set the service object name from the provided key
* import the `generic-service` template inside the apply rule
* define the [disk](10-icinga-template-library.md#plugin-check-command-disk) check command requiring command arguments like `disk_partitions`
* add the `config` dictionary items to `vars`. Simply said, there's now `vars.disk_partitions` defined for the
generated service.
Configuration example:

    apply Service for (disk => config in host.vars.disks) {
      import "generic-service"

      check_command = "disk"

      vars += config
    }

A similar example is used for the `http` services. That way you can make your
host the information provider for all apply rules. Define them once, and only
manage your hosts.

Look into [notifications.conf](4-configuring-icinga-2.md#notifications-conf) how this technique is used
for applying notifications to hosts and services using their type and user
attributes.

Don't forget to install the [check plugins](2-getting-started.md#setting-up-check-plugins) required by
the hosts and services and their check commands.

Further details on the monitoring configuration can be found in the
[monitoring basics](3-monitoring-basics.md#monitoring-basics) chapter.
#### <a id="users-conf"></a> users.conf

Defines the `icingaadmin` User and the `icingaadmins` UserGroup. The latter is used in
[hosts.conf](4-configuring-icinga-2.md#hosts-conf) for defining a custom host attribute later used in
[notifications.conf](4-configuring-icinga-2.md#notifications-conf) for notification apply rules.

    object User "icingaadmin" {
      import "generic-user"

      display_name = "Icinga 2 Admin"
      groups = [ "icingaadmins" ]

      email = "icinga@localhost"
    }

    object UserGroup "icingaadmins" {
      display_name = "Icinga 2 Admin Group"
    }
#### <a id="notifications-conf"></a> notifications.conf

Notifications for check alerts are an integral part of your
Icinga 2 monitoring stack.

The examples in this file define two notification apply rules for hosts and services.
Both `apply` rules match on the same condition: They are only applied if the
nested dictionary attribute `notification.mail` is set.

Please note that the `to` keyword is important in [notification apply rules](3-monitoring-basics.md#using-apply-notifications),
defining whether these notifications are applied to hosts or services.
The `import` keyword imports the specific mail templates defined in [templates.conf](4-configuring-icinga-2.md#templates-conf).

The `interval` attribute is not explicitly set -- it [defaults to 30 minutes](9-object-types.md#objecttype-notification).

By setting the `user_groups` to the value provided by the
respective [host.vars.notification.mail](4-configuring-icinga-2.md#hosts-conf) attribute we'll
implicitly use the `icingaadmins` UserGroup defined in [users.conf](4-configuring-icinga-2.md#users-conf).

    apply Notification "mail-icingaadmin" to Host {
      import "mail-host-notification"

      user_groups = host.vars.notification.mail.groups
      users = host.vars.notification.mail.users

      assign where host.vars.notification.mail
    }

    apply Notification "mail-icingaadmin" to Service {
      import "mail-service-notification"

      user_groups = host.vars.notification.mail.groups
      users = host.vars.notification.mail.users

      assign where host.vars.notification.mail
    }

More details on defining notifications and their additional attributes such as
filters can be read in [this chapter](3-monitoring-basics.md#notifications).
#### <a id="commands-conf"></a> commands.conf

This is the place where your own command configuration can be defined. By default,
only the notification commands used by the notification templates defined in
[templates.conf](4-configuring-icinga-2.md#templates-conf) are included here.

You can freely customize these notification commands and adapt them for your needs.
Read more on that topic [here](3-monitoring-basics.md#notification-commands).
#### <a id="groups-conf"></a> groups.conf

The example host defined in [hosts.conf](4-configuring-icinga-2.md#hosts-conf) already has the
custom attribute `os` set to `Linux` and is therefore automatically
a member of the host group `linux-servers`.

This is done by using the [group assign](17-language-reference.md#group-assign) expressions similar
to previously seen [apply rules](3-monitoring-basics.md#using-apply).

    object HostGroup "linux-servers" {
      display_name = "Linux Servers"

      assign where host.vars.os == "Linux"
    }

    object HostGroup "windows-servers" {
      display_name = "Windows Servers"

      assign where host.vars.os == "Windows"
    }

Service groups can be grouped together by similar pattern matches.
The [match function](18-library-reference.md#global-functions-match) expects a wildcard match string
and the attribute string to match with.

    object ServiceGroup "ping" {
      display_name = "Ping Checks"

      assign where match("ping*", service.name)
    }

    object ServiceGroup "http" {
      display_name = "HTTP Checks"

      assign where match("http*", service.check_command)
    }

    object ServiceGroup "disk" {
      display_name = "Disk Checks"

      assign where match("disk*", service.check_command)
    }
#### <a id="templates-conf"></a> templates.conf

Most of the example configuration objects use generic global templates by
default:

    template Host "generic-host" {
      max_check_attempts = 5
      check_interval = 1m
      retry_interval = 30s

      check_command = "hostalive"
    }

    template Service "generic-service" {
      max_check_attempts = 3
      check_interval = 1m
      retry_interval = 30s
    }

The `hostalive` check command is part of the
[Plugin Check Commands](10-icinga-template-library.md#plugin-check-commands).

    template Notification "mail-host-notification" {
      command = "mail-host-notification"

      states = [ Up, Down ]
      types = [ Problem, Acknowledgement, Recovery, Custom,
                FlappingStart, FlappingEnd,
                DowntimeStart, DowntimeEnd, DowntimeRemoved ]

      period = "24x7"
    }

    template Notification "mail-service-notification" {
      command = "mail-service-notification"

      states = [ OK, Warning, Critical, Unknown ]
      types = [ Problem, Acknowledgement, Recovery, Custom,
                FlappingStart, FlappingEnd,
                DowntimeStart, DowntimeEnd, DowntimeRemoved ]

      period = "24x7"
    }

More details on `Notification` object attributes can be found [here](9-object-types.md#objecttype-notification).
#### <a id="downtimes-conf"></a> downtimes.conf

The `load` service apply rule defined in [services.conf](4-configuring-icinga-2.md#services-conf) defines
the `backup_downtime` custom attribute.

The [ScheduledDowntime](9-object-types.md#objecttype-scheduleddowntime) apply rule uses this attribute
to define the default value for the time ranges required for recurring downtime slots.

    apply ScheduledDowntime "backup-downtime" to Service {
      author = "icingaadmin"
      comment = "Scheduled downtime for backup"

      ranges = {
        monday = service.vars.backup_downtime
        tuesday = service.vars.backup_downtime
        wednesday = service.vars.backup_downtime
        thursday = service.vars.backup_downtime
        friday = service.vars.backup_downtime
        saturday = service.vars.backup_downtime
        sunday = service.vars.backup_downtime
      }

      assign where service.vars.backup_downtime != ""
    }
#### <a id="timeperiods-conf"></a> timeperiods.conf

This file contains the default timeperiod definitions for `24x7`, `9to5`
and `never`. TimePeriod objects are referenced by the `*period`
attributes of objects such as hosts, services or notifications.
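The `24x7` definition, for example, looks similar to this sketch which covers every day of the week:

    object TimePeriod "24x7" {
      import "legacy-timeperiod"

      display_name = "Icinga 2 24x7 TimePeriod"

      ranges = {
        "monday"    = "00:00-24:00"
        "tuesday"   = "00:00-24:00"
        "wednesday" = "00:00-24:00"
        "thursday"  = "00:00-24:00"
        "friday"    = "00:00-24:00"
        "saturday"  = "00:00-24:00"
        "sunday"    = "00:00-24:00"
      }
    }

A host or service then references it, e.g. via `check_period = "24x7"`.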
#### <a id="satellite-conf"></a> satellite.conf

Includes default templates and dependencies for
[monitoring remote clients](6-distributed-monitoring.md#distributed-monitoring)
using service discovery and
[config generation](6-distributed-monitoring.md#distributed-monitoring-bottom-up)
on the master. Can be ignored/removed on setups not using this feature.

Further details on the monitoring configuration can be found in the
[monitoring basics](3-monitoring-basics.md#monitoring-basics) chapter.
#### <a id="api-users-conf"></a> api-users.conf

Provides the default [ApiUser](9-object-types.md#objecttype-apiuser) object
named "root" for the [API authentication](12-icinga2-api.md#icinga2-api-authentication).
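A minimal sketch of such an ApiUser object -- the password value is a placeholder you must replace with the one generated during your setup:

    object ApiUser "root" {
      password = "CHANGEME"        /* placeholder -- use your generated password */

      /* Grant full permissions for this example user. */
      permissions = [ "*" ]
    }

Restrict `permissions` to the specific API operations a user actually needs.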
#### <a id="app-conf"></a> app.conf

Provides the default [IcingaApplication](9-object-types.md#objecttype-icingaapplication)
object named "app" for additional settings such as disabling notifications