Integration with Elasticsearch
NetFlow Optimizer (NFO) sends data over UDP in syslog or JSON format, which makes it easy to ingest into Elasticsearch using Filebeat, Logstash, or both.
Important: configure the NFO output format as JSON.

Using Filebeat

Filebeat has a small footprint and lets you ship your flow data to Elasticsearch securely and reliably. Note that Filebeat cannot add calculated fields at index time; if that is required, use Logstash together with Filebeat. The steps below describe the NFO -> Filebeat -> Elasticsearch -> Kibana scenario.

Installation Steps

Install the following components:

    1. Elasticsearch
    2. Kibana
    3. Filebeat
    4. Make sure all services are running

Configure Filebeat

    1. Download nfo_fields.yml and add it to the Filebeat configuration directory (e.g. /etc/filebeat). This file contains the NFO field definitions for the template.
    2. In the Filebeat configuration directory, edit the filebeat.yml file and add the following after the ‘filebeat.inputs:’ line:
```yaml
- type: udp
  max_message_size: 10KiB
  host: "0.0.0.0:5514"
```
where 5514 is the Filebeat input port (it should match the NFO UDP output port)
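To check that the UDP input is reachable, you can send a sample JSON record to the port from any host that can reach the Filebeat machine. This is a minimal sketch; the field names in the payload are illustrative only, not the actual NFO output schema:

```python
import json
import socket

# Hypothetical sample record -- field names are illustrative,
# not the actual NFO output schema.
record = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "bytes": 1500}

payload = json.dumps(record).encode("utf-8")

# One datagram per record, sent to the Filebeat UDP input configured above.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(payload, ("127.0.0.1", 5514))
sock.close()
```

If the pipeline is wired up correctly, the record should appear in the nfo-* index a few seconds later.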
    3. After the ‘setup.template.settings:’ line, add the following:
```yaml
setup.template.enabled: true
setup.template.name: "nfo"
setup.template.pattern: "nfo-*"
setup.template.fields: "nfo_fields.yml"
setup.template.overwrite: true
# if ilm.enabled is set to false, a custom index name can be specified in the outputs
setup.ilm.enabled: false
```
    4. In the ‘output.elasticsearch:’ section, add the index name, for example:
```yaml
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "nfo-%{+yyyy.MM.dd}"
```
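The %{+yyyy.MM.dd} token makes Filebeat write each day's events to its own index, which keeps indices small and simplifies retention. The date formatting is done by Filebeat itself; this Python sketch only illustrates the resulting index names:

```python
from datetime import date, timedelta

def nfo_index_name(day: date) -> str:
    # Mirrors the "nfo-%{+yyyy.MM.dd}" pattern from filebeat.yml:
    # one index per calendar day.
    return f"nfo-{day.strftime('%Y.%m.%d')}"

today = date(2024, 3, 5)
print(nfo_index_name(today))                      # nfo-2024.03.05
print(nfo_index_name(today - timedelta(days=1)))  # nfo-2024.03.04
```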
    5. In the ‘processors:’ section, add the following:
```yaml
- decode_json_fields:
    fields: ["message"]
    target: ""
    overwrite_keys: true
```
    6. Restart the Filebeat service
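The decode_json_fields processor configured above parses the JSON string that arrives in the message field and, because target is the empty string and overwrite_keys is true, merges the parsed keys into the event root. Conceptually it behaves like this rough model (the event contents are illustrative):

```python
import json

def decode_json_fields(event: dict, fields: list) -> dict:
    """Rough model of Filebeat's decode_json_fields with target: "" and
    overwrite_keys: true -- parsed keys are merged into the event root."""
    for name in fields:
        raw = event.get(name)
        if not isinstance(raw, str):
            continue
        try:
            parsed = json.loads(raw)
        except ValueError:
            continue  # leave non-JSON messages untouched
        if isinstance(parsed, dict):
            event.update(parsed)  # overwrite_keys: true
    return event

event = {"message": '{"src_ip": "10.0.0.1", "bytes": 1500}'}
print(decode_json_fields(event, ["message"]))
```

Without this processor, the whole NFO record would be indexed as a single opaque message string instead of individual searchable fields.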

Configure Kibana

    1. In kibana.yml, set the Kibana host and port
    2. In Kibana (port 5601), go to %Kibana_server%/app/management/kibana/indexPatterns
    3. Define the new index pattern (e.g. nfo-*)

Using Logstash

Logstash has a larger footprint, but it enables you to filter and transform data, and to add calculated fields at index time if necessary.
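For example, a Logstash filter could derive a bytes-per-packet field for each flow record at index time. The Python sketch below only illustrates the calculation such a filter would perform; the field names are assumptions, not the actual NFO schema:

```python
def add_bytes_per_packet(flow: dict) -> dict:
    """Illustrative calculated field: average bytes per packet.
    Field names are hypothetical, not the actual NFO schema."""
    packets = flow.get("packets", 0)
    if packets:
        flow["bytes_per_packet"] = flow["bytes"] / packets
    return flow

print(add_bytes_per_packet({"bytes": 3000, "packets": 2}))
```

Computing such fields once at index time avoids recalculating them in every Kibana visualization.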

Installation Steps

Install the following components:

    1. Elasticsearch
    2. Kibana
    3. Logstash
    4. Make sure all services are running

Configure Logstash

    1. Download the following files and add them to the Logstash conf.d configuration directory (e.g. /etc/logstash/conf.d): nfo_mapping.json, nfo.conf
    2. Modify the input and output sections of nfo.conf to match your environment:
```
input {
  udp {
    port => 5514
    codec => json
  }
}
```
where 5514 is the Logstash input port (it should match the NFO UDP output port)
```
output {
  elasticsearch {
    hosts => ["http://<elasticsearch host IP>:9200"]
    index => "nfo-logstash-%{+yyyy.MM.dd}"
    template => "/etc/logstash/conf.d/nfo_mapping.json"
    template_name => "nfo*"
  }
}
```
    3. Restart the Logstash service

Configure Kibana

    1. In kibana.yml, set the Kibana host and port
    2. In Kibana (port 5601), go to %Kibana_server%/app/management/kibana/indexPatterns
    3. Define the new index pattern (e.g. nfo-logstash-*)