Version: 2.12.0

Data Outputs

Data Outputs represent the final stage of the NFO processing pipeline. After raw flows are ingested, decoded, and enriched with infrastructure and identity context, NFO formats this intelligence and delivers it to your security and operations platforms.

NFO supports a wide array of destinations, ranging from traditional SIEMs and log aggregators to modern cloud-native data lakes and streaming platforms.

Choosing Your Integration Path

To successfully integrate NFO with your platform, you need to configure two things:

  1. The Data Output Type: The technical protocol (HEC, API, or UDP) used to send data.
  2. The App/Content: Pre-built dashboards and parsers available in the Integrations & Apps section.

Platform Integration Map

Use the table below to identify the correct Data Output type for your specific platform:

| If your platform is... | Use this Data Output type | Available Content |
|---|---|---|
| Splunk | HTTP Event Collector (HEC) or OpenTelemetry | Splunk Integration |
| Microsoft Sentinel | Azure Log Analytics | Microsoft Sentinel Integration |
| CrowdStrike Falcon | HTTP Event Collector (HEC) (LogScale) | CrowdStrike Integration |
| Sumo Logic | JSON (UDP) | Sumo Logic Integration |
| Datadog | JSON (UDP) | Datadog Integration |
| New Relic | Syslog (UDP) | New Relic Integration |
| OpenSearch / Elastic | OpenSearch | Search |
| Exabeam | Syslog (UDP) | Exabeam Integration |
| SentinelOne (DataSet) | JSON (UDP) | DataSet Integration |
| Axoflow | JSON (UDP) | Axoflow Integration |
| VMware Aria Operations for Logs | Syslog (UDP) | VMware Aria Operations for Logs |

Output Categories

NFO categorizes outputs based on their delivery protocol and destination type:

1. API-Based Analytics

High-performance delivery using modern HTTP and REST APIs.
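As a concrete illustration of API-based delivery, the sketch below builds a Splunk HEC event request (endpoint path, `Authorization: Splunk <token>` header, and JSON envelope) without actually sending it. The field names in the sample record and the `sourcetype` value are illustrative assumptions, not NFO's actual output schema.

```python
import json
import time

def build_hec_request(event: dict, token: str, sourcetype: str = "nfo:flow"):
    """Build the URL path, headers, and body for a Splunk HEC event POST.

    Note: sourcetype and the event field names here are illustrative.
    """
    headers = {
        "Authorization": f"Splunk {token}",  # HEC token issued by Splunk
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "time": time.time(),        # event timestamp (epoch seconds)
        "sourcetype": sourcetype,
        "event": event,             # the enriched flow record itself
    })
    return "/services/collector/event", headers, body

path, headers, body = build_hec_request(
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "bytes": 1500},
    token="00000000-0000-0000-0000-000000000000",
)
```

In production the body would be POSTed over HTTPS to the collector's `/services/collector/event` endpoint; batching multiple events per request is a common throughput optimization.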

2. Standard Protocols

Universal formats used for observability pipelines and traditional SIEMs.

  • Syslog (UDP): Used for New Relic, Exabeam, and VMware Aria Operations for Logs.
  • JSON (UDP): Structured delivery for Sumo Logic, Datadog, SentinelOne (DataSet), Axoflow, or custom collectors.
  • Repeater (UDP): Forwarding raw or processed flows to multiple secondary destinations.
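The JSON (UDP) transport amounts to sending one JSON object per datagram. This minimal, self-contained sketch demonstrates the round trip over the loopback interface; the field names are illustrative, not NFO's actual schema.

```python
import json
import socket

# Bind a throwaway UDP "collector" on the loopback interface.
collector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
collector.settimeout(2.0)
addr = collector.getsockname()

# A flow record serialized as one JSON object per datagram.
record = {"src_ip": "192.0.2.10", "dst_ip": "198.51.100.7", "bytes": 4096}

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(json.dumps(record).encode("utf-8"), addr)

payload, _ = collector.recvfrom(65535)
received = json.loads(payload)
sender.close()
collector.close()
```

Because UDP is fire-and-forget, delivery is not guaranteed; receivers that need completeness typically sit close to the sender on a reliable network segment.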

3. Cloud & Data Lakes

Scalable storage options for massive volumes of network telemetry.

  • AWS S3: Writing partitioned logs directly to S3 buckets.
  • Azure Blob Storage: Delivering data in Syslog or JSON format to Azure containers.
  • Disk: Local file storage for debugging or manual processing.
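To show what "partitioned logs" can look like in practice, this sketch builds a date-partitioned S3 object key. The Hive-style `year=/month=/day=` layout is an assumed convention common in data lakes, not necessarily NFO's exact partitioning scheme.

```python
from datetime import datetime, timezone

def s3_partition_key(prefix: str, ts: datetime, seq: int) -> str:
    """Build a date-partitioned object key (assumed Hive-style layout)."""
    return (f"{prefix}/year={ts.year:04d}/month={ts.month:02d}/"
            f"day={ts.day:02d}/flows-{seq:06d}.json")

key = s3_partition_key("nfo-logs", datetime(2024, 5, 3, tzinfo=timezone.utc), 17)
# key: "nfo-logs/year=2024/month=05/day=03/flows-000017.json"
```

Partitioning by date lets query engines prune irrelevant objects, which matters at the data volumes raw flow archives produce.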

4. Messaging & Analytical Databases

Integration with modern streaming architectures and column-oriented stores.

  • Kafka: Publishing processed flows to Kafka topics (Syslog or JSON).
  • ClickHouse: High-speed delivery to ClickHouse for real-time big data analysis.
  • OpenTelemetry (OTLP): Sending metrics and logs in standardized OTLP format.

Core Output Principles

Regardless of the destination, NFO provides a standardized way to manage your data stream:

  • Format Flexibility: Choose between JSON, Syslog (Key=Value), or specialized API formats (like Splunk HEC or Azure Logs Ingestion) to match your receiver's requirements.
  • Field Selection: You have granular control over which metadata fields are included in the output. This allows you to balance data depth with storage costs.
  • Output Dictionary: Standardize field names across your enterprise. If your SIEM expects src_ip instead of sourceIPv4Address, the Output Dictionary handles this mapping automatically.
  • Load Balancing & Resiliency: Configure multiple endpoints for failover and high-availability delivery.
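The Output Dictionary principle above is essentially a field-rename mapping applied to every record before delivery. The sketch below illustrates the idea in plain Python; the mapping entries are assumptions based on the `sourceIPv4Address` → `src_ip` example, not NFO's built-in dictionary.

```python
# Hypothetical output dictionary: internal IPFIX names -> SIEM field names.
OUTPUT_DICTIONARY = {
    "sourceIPv4Address": "src_ip",
    "destinationIPv4Address": "dst_ip",
    "octetDeltaCount": "bytes",
}

def apply_dictionary(record: dict, mapping: dict) -> dict:
    """Rename fields per the mapping; unmapped fields pass through unchanged."""
    return {mapping.get(k, k): v for k, v in record.items()}

flow = {"sourceIPv4Address": "10.1.1.1", "octetDeltaCount": 900, "protocolIdentifier": 6}
renamed = apply_dictionary(flow, OUTPUT_DICTIONARY)
# renamed: {"src_ip": "10.1.1.1", "bytes": 900, "protocolIdentifier": 6}
```

Keeping the mapping in one place means every output destination sees consistent field names without per-destination parser changes.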

Output Filters

note

If you have only one output configured, the Output filter does not apply and defaults to All.

You can set filters for each output:

| Output Filter | Description |
|---|---|
| All | The destination receives all data generated by NFO, both by Modules and by original NetFlow/IPFIX/sFlow one-to-one conversion |
| Modules Output Only | The destination receives only data generated by enabled NFO Modules |
| Original NetFlow/IPFIX only | The destination receives all flow data translated into syslog or JSON, one-to-one. NetFlow/IPFIX Options from Original Flow Data, also translated one-to-one, are sent to this output as well. Use this option to archive all underlying flow records NFO processes for forensics. This destination is typically Hadoop or other inexpensive storage, as the volume can be quite high |
| Original sFlow only | The destination receives sFlow data translated into syslog or JSON, one-to-one. Use this option to archive all underlying sFlow records NFO processes for forensics. This destination is typically inexpensive syslog storage, as the volume can be quite high |
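The four filter values above can be read as a simple routing predicate: given who produced a record (a Module, the NetFlow/IPFIX converter, or the sFlow converter), decide whether it reaches a given output. This is a hedged sketch of that logic, not NFO's implementation; the `record_source` labels are assumptions.

```python
def matches_filter(record_source: str, output_filter: str) -> bool:
    """record_source: 'module', 'netflow', or 'sflow' (who produced the record)."""
    if output_filter == "All":
        return True
    if output_filter == "Modules Output Only":
        return record_source == "module"
    if output_filter == "Original NetFlow/IPFIX only":
        return record_source == "netflow"
    if output_filter == "Original sFlow only":
        return record_source == "sflow"
    return False

allowed = matches_filter("module", "Modules Output Only")   # True
```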

Module Filter

Here you can enter a comma-separated list of nfc_id values for the Modules to include in this output destination.
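Parsing such a comma-separated list and checking a record's module against it can be sketched as follows; the nfc_id values are hypothetical examples.

```python
def parse_module_filter(value: str) -> set:
    """Split a comma-separated nfc_id list, ignoring stray whitespace and empties."""
    return {item.strip() for item in value.split(",") if item.strip()}

included = parse_module_filter("20034, 20041,20067")
record_nfc_id = "20041"
should_send = record_nfc_id in included   # True: this record's Module is in the filter
```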


Global Output Services

Before configuring a specific destination, familiarize yourself with these shared services that govern how data is formatted:

  • Output Dictionary: Learn how to rename fields and maintain consistency across disparate data sets.
  • Original Flow Data: Configure a separate, voluminous stream for raw flow data (one-to-one conversion) while keeping enriched alerts in your primary SIEM.