By default the [InfluxdbWriter](09-object-types.md#objecttype-influxdbwriter) feature
expects the InfluxDB daemon to listen at `127.0.0.1` on port `8086`.
+Measurement names and tags are fully configurable by the end user. The InfluxdbWriter
+object will automatically add a `metric` tag to each data point. This correlates to the
+perfdata label. Fields (value, warn, crit, min, max, unit) are created from data if available
+and the configuration allows it. If a tag value cannot be resolved, the tag is
+dropped and not sent to the target host.
+
+Backslashes are allowed in tag keys, tag values and field keys. However, they also
+act as escape characters when followed by a space or comma, and cannot themselves
+be escaped. As a result, all trailing backslashes in these fields are replaced
+with an underscore. This predominantly affects Windows paths, e.g. `C:\` becomes `C:_`.
+
+The database is assumed to exist; this object currently makes no attempt to create it.
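+
+The database can be created up front, e.g. with the `influx` CLI (the database
+name `icinga2` is just an assumption here; use whatever your writer
+configuration points at):
+
+```
+# influx -execute 'CREATE DATABASE "icinga2"'
+```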
+
More configuration details can be found [here](09-object-types.md#objecttype-influxdbwriter).
+#### Instance Tagging <a id="influxdb-writer-instance-tags"></a>
+
+Consider the following service check:
+
+```
+apply Service "disk" for (disk => attributes in host.vars.disks) {
+ import "generic-service"
+ check_command = "disk"
+ display_name = "Disk " + disk
+ vars.disk_partitions = disk
+ assign where host.vars.disks
+}
+```
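+
+The `host.vars.disks` dictionary iterated above could, purely as an
+illustrative example, look like this:
+
+```
+object Host "example-host" {
+  import "generic-host"
+
+  vars.disks["/"] = {}
+  vars.disks["/var"] = {}
+}
+```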
+
+This is a typical pattern for checking individual disks, NICs, SSL certificates etc.
+associated with a host. It would be useful to have the data points tagged with the
+specific instance for that check. This would allow you to query time series data for a
+check on a host and for a specific instance, e.g. `/dev/sda`. To do this, simply add
+the instance to the service variables:
+
+```
+apply Service "disk" for (disk => attributes in host.vars.disks) {
+ ...
+ vars.instance = disk
+ ...
+}
+```
+
+Then modify your writer configuration to add this tag to your data points if the instance variable
+is associated with the service:
+
+```
+object InfluxdbWriter "influxdb" {
+ ...
+ service_template = {
+ measurement = "$service.check_command$"
+ tags = {
+ hostname = "$host.name$"
+ service = "$service.name$"
+ instance = "$service.vars.instance$"
+ }
+ }
+ ...
+}
+```
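+
+With the tag in place, time series can be filtered per instance, e.g. with an
+InfluxQL query (the host name and instance value are hypothetical; the
+measurement and tag names follow the configuration above):
+
+```
+SELECT * FROM "disk" WHERE "hostname" = 'example-host' AND "instance" = '/dev/sda'
+```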
+
### Elastic Stack Integration <a id="elastic-stack-integration"></a>
[Icingabeat](https://github.com/icinga/icingabeat) is an Elastic Beat that fetches data
* [Logstash output](https://github.com/Icinga/logstash-output-icinga) for the Icinga 2 API.
* [Logstash Grok Pattern](https://github.com/Icinga/logstash-grok-pattern) for Icinga 2 logs.
-#### Elastic Writer <a id="elastic-writer"></a>
+#### Elasticsearch Writer <a id="elasticsearch-writer"></a>
This feature forwards check results, state changes and notification events
to an [Elasticsearch](https://www.elastic.co/products/elasticsearch) installation over its HTTP API.
> **Note**
>
-> Elasticsearch 5.x+ is required.
+> Elasticsearch 5.x is required. This feature has been successfully tested with Elasticsearch 5.6.4.
Enable the feature and restart Icinga 2.
```
-# icinga2 feature enable elastic
+# icinga2 feature enable elasticsearch
```
The default configuration expects an Elasticsearch instance running on `localhost` on port `9200`
and writes to an index called `icinga2`.
-More configuration details can be found [here](09-object-types.md#objecttype-elasticwriter).
+More configuration details can be found [here](09-object-types.md#objecttype-elasticsearchwriter).
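+
+Once events are flowing, the index can be verified, e.g. with curl against the
+host, port and index name assumed above:
+
+```
+# curl 'localhost:9200/_cat/indices/icinga2*?v'
+```
+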
#### Current Elasticsearch Schema <a id="elastic-writer-schema"></a>
implements a query protocol that lets users query their Icinga instance for
status information. It can also be used to send commands.
-> **Tip**
->
-> Only install the Livestatus feature if your web interface or addon requires
-> you to do so (for example, [Icinga Web 2](02-getting-started.md#setting-up-icingaweb2)).
-> Icinga Classic UI 1.x and Icinga Web 1.x do not use Livestatus as backend.
-
The Livestatus component that is distributed as part of Icinga 2 is a
re-implementation of the Livestatus protocol which is compatible with MK
Livestatus.
+> **Tip**
+>
+> Only install the Livestatus feature if your web interface or addon requires
+> you to do so.
+> [Icinga Web 2](02-getting-started.md#setting-up-icingaweb2) does not need
+> Livestatus.
+
Details on the available tables and attributes with Icinga 2 can be found
in the [Livestatus Schema](24-appendix.md#schema-livestatus) section.
After that you will have to restart Icinga 2:
-RHEL/CentOS 7/Fedora, SLES 12, Debian Jessie/Stretch, Ubuntu Xenial:
-
# systemctl restart icinga2
-Debian/Ubuntu, RHEL/CentOS 6 and SUSE:
-
- # service icinga2 restart
-
By default the Livestatus socket is available in `/var/run/icinga2/cmd/livestatus`.
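+
+A quick way to test the socket is to send a raw Livestatus query, e.g. with
+`nc` (assuming the OpenBSD netcat variant with Unix socket support):
+
+```
+# echo -e 'GET hosts\nColumns: name\n' | nc -U /var/run/icinga2/cmd/livestatus
+```
+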
In order for queries and commands to work you will need to add your query user
# icinga2 feature enable statusdata
-Icinga 1.x Classic UI requires this data set as part of its backend.
-
-> **Note**
->
-> If you are not using any web interface or addon which uses these files,
-> you can safely disable this feature.
+If you are not using any web interface or addon which uses these files,
+you can safely disable this feature.
## Compat Log Files <a id="compat-logging"></a>
The Icinga 1.x log format is considered the `Compat Log`
in Icinga 2, provided by the `CompatLogger` object.
-These logs are not only used for informational representation in
-external web interfaces parsing the logs, but also to generate
-SLA reports and trends in Icinga 1.x Classic UI. Furthermore the
-[Livestatus](14-features.md#setting-up-livestatus) feature uses these logs for answering queries to
-historical tables.
+These logs are used for informational representation in
+external web interfaces parsing the logs and to generate
+SLA reports and trends.
+The [Livestatus](14-features.md#setting-up-livestatus) feature uses these logs
+for answering queries to historical tables.
The `CompatLogger` object can be enabled with
in `/var/log/icinga2/compat`. Rotated log files are moved into
`/var/log/icinga2/compat/archives`.
-The format cannot be changed without breaking compatibility to
-existing log parsers.
-
- # tail -f /var/log/icinga2/compat/icinga.log
-
- [1382115688] LOG ROTATION: HOURLY
- [1382115688] LOG VERSION: 2.0
- [1382115688] HOST STATE: CURRENT;localhost;UP;HARD;1;
- [1382115688] SERVICE STATE: CURRENT;localhost;disk;WARNING;HARD;1;
- [1382115688] SERVICE STATE: CURRENT;localhost;http;OK;HARD;1;
- [1382115688] SERVICE STATE: CURRENT;localhost;load;OK;HARD;1;
- [1382115688] SERVICE STATE: CURRENT;localhost;ping4;OK;HARD;1;
- [1382115688] SERVICE STATE: CURRENT;localhost;ping6;OK;HARD;1;
- [1382115688] SERVICE STATE: CURRENT;localhost;processes;WARNING;HARD;1;
- [1382115688] SERVICE STATE: CURRENT;localhost;ssh;OK;HARD;1;
- [1382115688] SERVICE STATE: CURRENT;localhost;users;OK;HARD;1;
- [1382115706] EXTERNAL COMMAND: SCHEDULE_FORCED_SVC_CHECK;localhost;disk;1382115705
- [1382115706] EXTERNAL COMMAND: SCHEDULE_FORCED_SVC_CHECK;localhost;http;1382115705
- [1382115706] EXTERNAL COMMAND: SCHEDULE_FORCED_SVC_CHECK;localhost;load;1382115705
- [1382115706] EXTERNAL COMMAND: SCHEDULE_FORCED_SVC_CHECK;localhost;ping4;1382115705
- [1382115706] EXTERNAL COMMAND: SCHEDULE_FORCED_SVC_CHECK;localhost;ping6;1382115705
- [1382115706] EXTERNAL COMMAND: SCHEDULE_FORCED_SVC_CHECK;localhost;processes;1382115705
- [1382115706] EXTERNAL COMMAND: SCHEDULE_FORCED_SVC_CHECK;localhost;ssh;1382115705
- [1382115706] EXTERNAL COMMAND: SCHEDULE_FORCED_SVC_CHECK;localhost;users;1382115705
- [1382115731] EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT;localhost;ping6;2;critical test|
- [1382115731] SERVICE ALERT: localhost;ping6;CRITICAL;SOFT;2;critical test
-
-
## Check Result Files <a id="check-result-files"></a>
Icinga 1.x writes its check result files to a temporary spool directory