Configure Outputs
NetFlow Optimizer processes, enriches, and optimizes your network flow and telemetry data, making it ready for integration with a variety of downstream systems. NFO acts as a powerful data transformation and distribution hub, ensuring your valuable network intelligence reaches the platforms where it's most effective for analysis, security operations, and reporting.
NFO supports sending processed data to a wide range of destinations, including popular Security Information and Event Management (SIEM) systems, like Splunk or Exabeam, data lakes for long-term storage and big data analytics, AWS S3 buckets, and specialized observability pipelines like Axoflow and Cribl.
Click on the symbol to add data outputs and select the desired Output Type from the drop-down.
You may add up to sixteen output destinations, specifying the format and the kind of data to be sent to each destination. To configure a specific output type, please refer to the corresponding section of this guide:
Output Types
NFO supports the following types of outputs:
Type | Description |
---|---|
Syslog (UDP) | Indicates the destination where data is sent in syslog format |
Syslog (JSON) | Indicates the destination where data is sent in JSON format |
AWS S3 | Indicates the destination is AWS S3 buckets |
Disk | Indicates the destination is a disk |
Splunk HEC | Indicates that the destination is Splunk HEC. NFO sends data to Splunk HEC in key=value format |
Splunk Observability Metrics (deprecated) | Indicates that the destination is Splunk Observability Cloud (SignalFx) |
Azure Blob Storage Syslog | Indicates the destination is Azure Blob Storage in Syslog format |
Azure Blob Storage JSON | Indicates the destination is Azure Blob Storage in JSON format |
Azure Log Analytics Workspace | Indicates the destination is Microsoft Azure Log Analytics Workspace (Azure Monitor, Sentinel) |
Kafka Syslog | Indicates the destination is Kafka in Syslog format |
Kafka JSON | Indicates the destination is Kafka in JSON format |
OpenSearch | Indicates the destination is OpenSearch (e.g. Amazon OpenSearch Service) |
OpenTelemetry | Indicates the destination that supports OpenTelemetry Metrics. Choose this destination type for Splunk Observability Cloud |
ClickHouse | Indicates the destination of your ClickHouse database |
Repeater (UDP) | Indicates that flow data received by NFO should be retransmitted to that destination, e.g., your legacy NetFlow collector or another NFO instance |
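Several of the output types above carry flow records as flat text, such as the key=value format NFO sends to Splunk HEC. As a minimal sketch of what consuming such a record might look like, the snippet below parses a space-separated key=value line into a dict. The field names in the sample record are illustrative only, not NFO's actual output schema.

```python
def parse_kv(record: str) -> dict:
    """Parse a space-separated key=value record into a dict of strings."""
    fields = {}
    for token in record.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields


# Hypothetical sample record; the field names are assumptions for illustration.
sample = "src_ip=10.0.0.1 dest_ip=10.0.0.2 bytes=1024 protocol=6"
parsed = parse_kv(sample)
```

A downstream consumer would typically apply further type conversion (e.g., casting `bytes` to an integer) before indexing.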
Output Filters
If you have only one output, the Output filter is not applicable and reverts to All.
You can set filters for each output:
Output Filter | Description |
---|---|
All | Indicates the destination for all data generated by NFO, both by Modules and by Original NetFlow/IPFIX/sFlow one-to-one conversion |
Modules Output Only | Indicates the destination will receive data only generated by enabled NFO Modules |
Original NetFlow/IPFIX only | Indicates the destination for all flow data, translated into syslog or JSON one-to-one. NetFlow/IPFIX Options records from the original flow data are also translated one-to-one and sent to this output. Use this option to archive all underlying flow records NFO processes for forensics. This destination is typically Hadoop or another inexpensive storage system, as the volume for this destination can be quite high |
Original sFlow only | Indicates the destination for sFlow data, translated into syslog or JSON one-to-one. Use this option to archive all underlying sFlow records NFO processes for forensics. This destination is typically configured to send output to inexpensive syslog storage, as the volume for this destination can be quite high |
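Conceptually, each output filter decides which class of records an output receives: everything, module-generated data only, or one of the original one-to-one streams. The sketch below models that routing decision in plain Python; it is an illustration of the table above, not NFO's implementation, and the source labels (`"module"`, `"netflow"`, `"sflow"`) are assumed names.

```python
def output_accepts(record_source: str, output_filter: str) -> bool:
    """Return True if a record from the given source passes the output's filter.

    record_source: "module" (NFO Module output), "netflow" (original
    NetFlow/IPFIX, one-to-one), or "sflow" (original sFlow, one-to-one).
    """
    if output_filter == "All":
        return True
    if output_filter == "Modules Output Only":
        return record_source == "module"
    if output_filter == "Original NetFlow/IPFIX only":
        return record_source == "netflow"
    if output_filter == "Original sFlow only":
        return record_source == "sflow"
    return False
```

For example, a destination configured with "Modules Output Only" would receive module-generated records but not the raw one-to-one converted flows.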
Module Filter
Here you can enter a comma-separated list of nfc_id values for the Modules to be included in this output destination.
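As a small sketch of how such a comma-separated nfc_id list might be handled, the snippet below normalizes the value into a set and checks whether a given module matches. The nfc_id values shown are placeholders, not identifiers of real NFO Modules.

```python
def parse_module_filter(filter_value: str) -> set:
    """Turn a comma-separated nfc_id list into a set, ignoring blanks/whitespace."""
    return {item.strip() for item in filter_value.split(",") if item.strip()}


def module_included(nfc_id: str, filter_value: str) -> bool:
    """Return True if the module's nfc_id appears in the filter list."""
    return nfc_id in parse_module_filter(filter_value)


# Placeholder nfc_id values for illustration only.
allowed = parse_module_filter("20067, 20050, 20031")
```

Trimming whitespace around each entry keeps the check tolerant of how the list was typed into the configuration field.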