
Integration with Elasticsearch

You can integrate NetFlow Optimizer (NFO) with Elasticsearch by sending data over the UDP protocol in JSON format, using Filebeat, Logstash, or both.

note

Important: configure the NFO output format as JSON
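In this mode NFO sends each record as a JSON object over UDP. The record below is purely illustrative; every field name in it is an assumption for the sake of the example, not NFO's actual schema:

{
  "exp_ip": "10.1.1.1",
  "src_ip": "192.168.1.10",
  "dest_ip": "192.168.1.20",
  "protocol": 6,
  "bytes_in": 1480,
  "packets_in": 3
}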

Using Filebeat

Filebeat has a small footprint and enables you to ship your flow data to Elasticsearch securely and reliably. Please note that Filebeat cannot add calculated fields at index time; if that is required, Logstash can be used together with Filebeat. The steps below describe the NFO -> Filebeat -> Elasticsearch -> Kibana scenario.

Installation Steps

Install the following components:

  1. Filebeat: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation-configuration.html
  2. Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html
  3. Kibana: https://www.elastic.co/guide/en/kibana/current/install.html
  4. Make sure all services are running (a quick check is sketched below)
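One quick way to verify step 4, assuming a systemd-based Linux host (service names may differ on your platform):

# Check that each service is active
systemctl status filebeat elasticsearch kibana

# Confirm Elasticsearch answers on its REST port
curl -s http://localhost:9200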

Configure Filebeat

  1. Download nfo_fields.yml and add it to the Filebeat configuration directory (e.g., /etc/filebeat). This file contains the NFO field definitions for the index template
  2. In the Filebeat configuration directory, edit the filebeat.yml file. After the 'filebeat.inputs:' line, add the following (a fully assembled example appears after these steps):
- type: udp
  max_message_size: 10KiB
  host: "0.0.0.0:5514"
  where 5514 is the Filebeat input port (it must match the NFO UDP output port)
  3. After the 'setup.template.settings:' line, add the following:
setup.template.enabled: true
setup.template.name: "nfo"
setup.template.pattern: "nfo-*"
setup.template.fields: "nfo_fields.yml"
setup.template.overwrite: true
# if ilm.enabled is set to false, a custom index name can be specified in the outputs
setup.ilm.enabled: false
  4. In the 'output.elasticsearch:' section, add the index name, for example:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "nfo-%{+yyyy.MM.dd}"
  5. In the 'processors:' section, add this:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
  6. Restart the Filebeat service
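Putting these pieces together, a minimal filebeat.yml for this scenario might look like the sketch below. It is assembled from the steps above; the localhost addresses and port 5514 are assumptions, so adjust them to your environment.

filebeat.inputs:
- type: udp
  max_message_size: 10KiB
  host: "0.0.0.0:5514"   # must match the NFO UDP output port

setup.template.enabled: true
setup.template.name: "nfo"
setup.template.pattern: "nfo-*"
setup.template.fields: "nfo_fields.yml"
setup.template.overwrite: true
setup.ilm.enabled: false   # allows the custom index name below

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "nfo-%{+yyyy.MM.dd}"

processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true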

Configure Kibana

  1. In kibana.yml, set the Kibana server host and port (a minimal sketch follows these steps)
  2. In Kibana (default port 5601), go to %Kibana_server%/app/management/kibana/indexPatterns
  3. Define the new index pattern (e.g., nfo-* to match the index created above)
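For step 1, a minimal kibana.yml might look like the sketch below; the addresses are assumptions, substitute your own:

# Address and port the Kibana server listens on
server.host: "0.0.0.0"
server.port: 5601

# Where Kibana finds Elasticsearch (adjust to your cluster)
elasticsearch.hosts: ["http://localhost:9200"]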

Using Logstash

Logstash has a larger footprint, but enables you to filter and transform data, adding calculated fields at index time if necessary (a filter sketch follows).
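For example, the hypothetical filter below derives a bytes_per_packet field at index time from assumed bytes and packets fields; these names are illustrative, not taken from the NFO schema, so map them to the fields your records actually carry. It would go in the filter section of nfo.conf (described below).

filter {
  # Hypothetical calculated field: bytes per packet,
  # guarded against missing fields and division by zero
  ruby {
    code => "
      b = event.get('bytes')
      p = event.get('packets')
      event.set('bytes_per_packet', b.to_f / p.to_f) if b && p && p.to_i > 0
    "
  }
}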

Installation Steps

Install the following components:

  1. Logstash: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
  2. Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html
  3. Kibana: https://www.elastic.co/guide/en/kibana/current/install.html
  4. Make sure all services are running (a quick check is sketched below)
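As in the Filebeat scenario, a quick check on a systemd-based Linux host (service names may differ on your platform):

systemctl status logstash elasticsearch kibana
curl -s http://localhost:9200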

Configure Logstash

  1. Download the following files and add them to the Logstash conf.d configuration directory (e.g., /etc/logstash/conf.d): nfo_mapping.json and nfo.conf
  2. Modify the input and output sections of nfo.conf to match your environment:
input {
  udp {
    port => 5514
    codec => json
  }
}
  where 5514 is the Logstash input port (it must match the NFO UDP output port)
output {
  elasticsearch {
    hosts => ["http://<elasticsearch host IP>:9200"]
    index => "nfo-logstash-%{+yyyy.MM.dd}"
    template => "/etc/logstash/conf.d/nfo_mapping.json"
    template_name => "nfo*"
  }
}
  3. Restart the Logstash service (an end-to-end check is sketched below)
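To confirm data flows end to end, you can hand-feed Logstash a JSON datagram and then look for the day's index. The commands below assume Logstash listens on UDP port 5514 on localhost and that netcat (nc) and curl are installed:

# Send a one-off JSON test message to the Logstash UDP input
echo '{"test_field": "hello"}' | nc -u -w1 localhost 5514

# Verify that an nfo-logstash-* index was created
curl -s 'http://localhost:9200/_cat/indices/nfo-logstash-*?v'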

Configure Kibana

  1. In kibana.yml, set the Kibana server host and port (see the sketch in the Filebeat section above)
  2. In Kibana (default port 5601), go to %Kibana_server%/app/management/kibana/indexPatterns
  3. Define the new index pattern (e.g., nfo-logstash-* to match the index created above)