Data Outputs
Data Outputs represent the final stage of the NFO processing pipeline. After raw flows are ingested, decoded, and enriched with infrastructure and identity context, NFO formats this intelligence and delivers it to your security and operations platforms.
NFO supports a wide array of destinations, ranging from traditional SIEMs and log aggregators to modern cloud-native data lakes and streaming platforms.
Choosing Your Integration Path
To successfully integrate NFO with your platform, you need to configure two things:
- The Data Output Type: The technical protocol (HEC, API, or UDP) used to send data.
- The App/Content: Pre-built dashboards and parsers available in the Integrations & Apps section.
Platform Integration Map
Use the table below to identify the correct Data Output type for your specific platform:
| If your platform is... | Use this Data Output type | Available Content |
|---|---|---|
| Splunk | HTTP Event Collector (HEC) or OpenTelemetry | Splunk Integration |
| Microsoft Sentinel | Azure Log Analytics | Microsoft Sentinel Integration |
| CrowdStrike Falcon | HTTP Event Collector (HEC) (LogScale) | CrowdStrike Integration |
| Sumo Logic | JSON (UDP) | Sumo Logic Integration |
| Datadog | JSON (UDP) | Datadog Integration |
| New Relic | Syslog (UDP) | New Relic Integration |
| OpenSearch / Elastic | OpenSearch | Search |
| Exabeam | Syslog (UDP) | Exabeam Integration |
| SentinelOne (DataSet) | JSON (UDP) | DataSet Integration |
| Axoflow | JSON (UDP) | Axoflow Integration |
| VMware Aria Operations for Logs | Syslog (UDP) | VMware Aria Operations for Logs |
Output Categories
NFO categorizes outputs based on their delivery protocol and destination type:
1. API-Based Analytics
High-performance delivery using modern HTTP and REST APIs.
- HTTP Event Collector (HEC): Optimized for Splunk and CrowdStrike.
- Azure Log Analytics: Uses the modern Logs Ingestion API (replacing the deprecated Data Collector API).
- Amazon OpenSearch: Direct delivery to managed or self-hosted clusters.
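Splunk's HTTP Event Collector wraps each event in a small JSON envelope before it is POSTed to the collector endpoint. The sketch below shows how a decoded flow record might be packaged; the envelope keys (`time`, `host`, `sourcetype`, `event`) come from the HEC event format, while the field names inside the flow record and the `nfo:flow` sourcetype are illustrative assumptions, not NFO defaults.

```python
import json
import time

def to_hec_event(flow: dict, sourcetype: str = "nfo:flow") -> str:
    """Wrap a decoded flow record in a Splunk HEC event envelope."""
    envelope = {
        "time": flow.get("flowEndSeconds", time.time()),  # event timestamp
        "host": flow.get("exporterAddress", "nfo"),       # reporting device
        "sourcetype": sourcetype,                         # drives parsing on the Splunk side
        "event": flow,                                    # the enriched flow record itself
    }
    return json.dumps(envelope)

# Hypothetical enriched flow record for illustration
record = {"sourceIPv4Address": "10.0.0.5", "destinationIPv4Address": "8.8.8.8",
          "octetDeltaCount": 1500, "exporterAddress": "10.0.0.1"}
print(to_hec_event(record))
```

In production the resulting JSON would be sent over HTTPS with an `Authorization: Splunk <token>` header; here only the payload construction is shown.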
2. Standard Protocols
Universal formats used for observability pipelines and traditional SIEMs.
- Syslog (UDP): Used for New Relic, Exabeam, and VMware Aria Operations for Logs.
- JSON (UDP): Structured delivery for Sumo Logic, Datadog, SentinelOne (DataSet), Axoflow, or custom collectors.
- Repeater (UDP): Forwarding raw or processed flows to multiple secondary destinations.
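The mechanics of JSON-over-UDP delivery are simple enough to sketch: each record is serialized and shipped as a single datagram. The flow field names and the loopback listener below are illustrative assumptions standing in for a real collector, not NFO configuration.

```python
import json
import socket

def send_flow_udp(flow: dict, host: str = "127.0.0.1", port: int = 9999) -> None:
    """Serialize a flow record as JSON and ship it in one UDP datagram."""
    payload = json.dumps(flow).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Loopback demo: a throwaway listener stands in for the collector
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))            # let the OS pick a free port
_, demo_port = listener.getsockname()
send_flow_udp({"sourceIPv4Address": "10.0.0.5", "octetDeltaCount": 420},
              port=demo_port)
received = json.loads(listener.recv(65535))
listener.close()
print(received)
```

Because UDP is connectionless and unacknowledged, receivers for high-volume flow data should size their socket buffers accordingly.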
3. Cloud & Data Lakes
Scalable storage options for massive volumes of network telemetry.
- AWS S3: Writing partitioned logs directly to S3 buckets.
- Azure Blob Storage: Delivering data in Syslog or JSON format to Azure containers.
- Disk: Local file storage for debugging or manual processing.
4. Messaging & Analytical Databases
Integration with modern streaming architectures and column-oriented stores.
- Kafka: Publishing processed flows to Kafka topics (Syslog or JSON).
- ClickHouse: High-speed delivery to ClickHouse for real-time big data analysis.
- OpenTelemetry (OTLP): Sending metrics and logs in standardized OTLP format.
Core Output Principles
Regardless of the destination, NFO provides a standardized way to manage your data stream:
- Format Flexibility: Choose between JSON, Syslog (Key=Value), or specialized API formats (like Splunk HEC or Azure Logs Ingestion) to match your receiver's requirements.
- Field Selection: You have granular control over which metadata fields are included in the output. This allows you to balance data depth with storage costs.
- Output Dictionary: Standardize field names across your enterprise. If your SIEM expects `src_ip` instead of `sourceIPv4Address`, the Output Dictionary handles this mapping automatically.
- Load Balancing & Resiliency: Configure multiple endpoints for failover and high-availability delivery.
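Conceptually, the Output Dictionary is a rename map applied to each record before delivery. The sketch below illustrates that idea; the mapping entries are examples, not NFO's shipped defaults.

```python
# Hypothetical rename map: IPFIX-style names -> SIEM-friendly names
OUTPUT_DICTIONARY = {
    "sourceIPv4Address": "src_ip",
    "destinationIPv4Address": "dest_ip",
    "octetDeltaCount": "bytes",
}

def apply_dictionary(flow: dict) -> dict:
    """Rename fields per the dictionary; unmapped fields pass through unchanged."""
    return {OUTPUT_DICTIONARY.get(key, key): value for key, value in flow.items()}

flow = {"sourceIPv4Address": "10.0.0.5", "protocolIdentifier": 6}
print(apply_dictionary(flow))  # {'src_ip': '10.0.0.5', 'protocolIdentifier': 6}
```

Fields not listed in the map pass through under their original names, so a partial dictionary is safe to deploy incrementally.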
Output Filters
If you have only one output, the Output filter does not apply and defaults to All.
You can set filters for each output:
| Output Filter | Description |
|---|---|
| All | Indicates the destination for all data generated by NFO, both by Modules and by Original NetFlow/IPFIX/sFlow one-to-one conversion |
| Modules Output Only | Indicates the destination will receive data only generated by enabled NFO Modules |
| Original NetFlow/IPFIX only | Indicates the destination for all NetFlow/IPFIX flow data, translated into syslog or JSON one-to-one. NetFlow/IPFIX Options records from Original Flow Data are also sent to this output. Use this option to archive all underlying flow records NFO processes for forensics. This destination is typically Hadoop or other inexpensive storage, as the volume for this destination can be quite high |
| Original sFlow only | Indicates the destination for sFlow data, translated into syslog or JSON one-to-one. Use this option to archive all underlying sFlow records NFO processes for forensics. This destination is typically inexpensive syslog storage, as the volume for this destination can be quite high |
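The routing decision the filter table describes can be sketched as a simple predicate. The `origin` tag on each record is a hypothetical stand-in for how NFO distinguishes module-generated data from one-to-one converted flows; only the filter names come from the table above.

```python
def matches_filter(record: dict, output_filter: str) -> bool:
    """Return True if a record should be sent to an output with this filter.
    record["origin"] is a hypothetical tag: 'module', 'netflow', or 'sflow'."""
    origin = record["origin"]
    if output_filter == "All":
        return True                       # everything NFO generates
    if output_filter == "Modules Output Only":
        return origin == "module"         # enriched module data only
    if output_filter == "Original NetFlow/IPFIX only":
        return origin == "netflow"        # one-to-one converted flows
    if output_filter == "Original sFlow only":
        return origin == "sflow"
    return False

records = [{"origin": "module"}, {"origin": "netflow"}, {"origin": "sflow"}]
print([matches_filter(r, "Modules Output Only") for r in records])  # [True, False, False]
```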
Module Filter
Here you can enter a comma-separated list of `nfc_id` values for the Modules to be included in this output destination.
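In effect, the Module Filter turns the comma-separated list into a membership check against each record's module ID. The sketch below assumes a hypothetical record shape carrying an `nfc_id` field, and the IDs shown are placeholders.

```python
def make_module_filter(nfc_ids: str):
    """Build a predicate from a comma-separated nfc_id list (whitespace tolerated)."""
    wanted = {part.strip() for part in nfc_ids.split(",") if part.strip()}
    return lambda record: record.get("nfc_id") in wanted

keep = make_module_filter("20067, 20112")   # placeholder module IDs
print(keep({"nfc_id": "20067"}))  # True
print(keep({"nfc_id": "20999"}))  # False
```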
Global Output Services
Before configuring a specific destination, familiarize yourself with these shared services that govern how data is formatted:
- Output Dictionary: Learn how to rename fields and maintain consistency across disparate data sets.
- Original Flow Data: Configure a separate, voluminous stream for raw flow data (one-to-one conversion) while keeping enriched alerts in your primary SIEM.