In my previous post I covered how to split winlogbeat logs to different Kafka topics using dynamic field values; in this post I will cover the same thing, but with metricbeat.

So again, let's say you want to direct different kinds of metrics to various topics / indices. We can do this by adding fields within the individual modules.d *.yml files.

First, the metricbeat.yml config needs to look something like this (add a username and password to the kafka output if needed, there is a sketch of that just after the config)

---
# modules

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# Outputs

output.kafka:
  hosts:
    - saveme-1.vman.ch:6969
    - saveme-2.vman.ch:6969
    - saveme-3.vman.ch:6969
  topic: "%{[kafka_topic]}"

  ssl.enabled: true

# Logging

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: warning
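
If your brokers require authentication, the Kafka output also accepts credentials directly. Here is a minimal sketch of the same output with SASL credentials added; the user, password and mechanism below are placeholders, adjust them for your cluster:

output.kafka:
  hosts:
    - saveme-1.vman.ch:6969
    - saveme-2.vman.ch:6969
    - saveme-3.vman.ch:6969
  topic: "%{[kafka_topic]}"

  # placeholder credentials, swap in your own Kafka user
  username: beatsuser
  password: beatsuserpassword
  # PLAIN, SCRAM-SHA-256 or SCRAM-SHA-512 depending on the brokers
  sasl.mechanism: SCRAM-SHA-512

  ssl.enabled: true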

Then within system.yml:

---
- module: system
  period: 10s
  metricsets:
    - uptime
    - filesystem
    - fsstat
    - cpu
    - core
    - diskio
    - memory
    - network
    - process
    - process_summary
    - socket_summary
  filesystem.ignore_types: [unknown, unavailable] #ignore CD drives, etc
  cpu.metrics: [percentages, normalized_percentages]
  fields:
    kafka_topic: metrics-windows-system
  fields_under_root: true

Here is another example, for iis.yml:

---
- module: iis
  metricsets:
    - webserver
    - website
    - application_pool
  enabled: true
  period: 10s
  fields:
    kafka_topic: metrics-windows-iis
  fields_under_root: true

This can be repeated in the .yml files of all the modules you use; because fields_under_root: true puts kafka_topic at the root of each event, the %{[kafka_topic]} reference in the Kafka output resolves per module and the events are sent to the right topic. Another great use case / advanced option is sql.yml, where you can define multiple queries whose results end up in different topics / indices.

---
- module: sql
  metricsets:
    - query
  period: 5m
  hosts: ["sqlserver://localhost/vman"]
  username: superduperuser
  password: superduperusersecurepassword
  driver: "mssql"
  merge_results: false
  sql_queries:
    - query: "SELECT * FROM table1"
      response_format: table
  fields:
    kafka_topic: metrics-windows-mssql
    query_name: vmantable1query1
  fields_under_root: true

- module: sql
  metricsets:
    - query
  period: 30s
  hosts: ["sqlserver://localhost/vman"]
  username: superduperuser2
  password: superduperusersecurepasswordforquery2
  driver: "mssql"
  merge_results: false
  sql_queries:
    - query: "SELECT * FROM table2"
      response_format: table
  fields:
    kafka_topic: metrics-windows-mssql
    query_name: vmantable2query1
  fields_under_root: true

- module: sql
  metricsets:
    - query
  period: 1m
  hosts: ["sqlserver://localhost/vman"]
  username: superduperuser3
  password: superduperusersecurepasswordforquery3
  driver: "mssql"
  merge_results: false
  sql_queries:
    - query: "SELECT * FROM table3"
      response_format: table
  fields:
    kafka_topic: metrics-windows-mssql
    query_name: vmantable3query1
  fields_under_root: true

Adding query_name as an additional field makes it easier to filter the results in Kibana.
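
For example, a quick KQL filter in the Kibana search bar (assuming the events land in an index pattern / data view that covers these metrics) could look like:

query_name : "vmantable1query1" and kafka_topic : "metrics-windows-mssql"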

Hope you found this helpful.

vMan