Data Outputs

Data Outputs represent the final stage of the NFO processing pipeline. After raw flows are ingested, decoded, and enriched with infrastructure and identity context, NFO formats this intelligence and delivers it to your security and operations platforms.

NFO supports a wide array of destinations, ranging from traditional SIEMs and log aggregators to modern cloud-native data lakes and streaming platforms.

Choosing Your Integration Path

To successfully integrate NFO with your platform, you need to configure two things:

  1. The Data Output Type: The technical protocol (HEC, API, or UDP) used to send data.
  2. The App/Content: Pre-built dashboards and parsers available in the Integrations & Apps section.

Platform Integration Map

Use the table below to identify the correct Data Output type for your specific platform:

| If your platform is... | Use this Data Output type | Available Content |
| --- | --- | --- |
| Splunk | HTTP Event Collector (HEC) or OpenTelemetry | Splunk Integration |
| Microsoft Sentinel | Azure Log Analytics | Microsoft Sentinel Integration |
| CrowdStrike Falcon | HTTP Event Collector (HEC) (LogScale) | CrowdStrike Integration |
| Datadog | JSON (UDP) | Datadog Integration |
| New Relic | Syslog (UDP) | New Relic Integration |
| Sumo Logic | JSON (UDP) | Sumo Logic Integration |
| OpenSearch / Elastic | OpenSearch | Search |
| Exabeam | Syslog (UDP) | Exabeam Integration |
| SentinelOne (DataSet) | JSON (UDP) | DataSet Integration |
| Axoflow | JSON (UDP) | Axoflow Integration |
| vRealize Log Insight | Syslog (UDP) | vRealize Content Pack |

Output Categories

NFO categorizes outputs based on their delivery protocol and destination type:

1. API-Based Analytics

High-performance delivery using modern HTTP and REST APIs:

  • HTTP Event Collector (HEC): Token-authenticated HTTP delivery to Splunk or CrowdStrike Falcon LogScale (illustrated below).
  • Azure Log Analytics: Direct ingestion into Microsoft Sentinel workspaces.
  • OpenSearch: Indexing into OpenSearch or Elastic clusters.
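As an illustration of the HEC path, the sketch below sends a single JSON event to the standard Splunk HEC endpoint. The host, token, and event fields are placeholders rather than NFO configuration; in practice NFO builds these requests for you once the output is configured.

```python
# Minimal sketch of an HEC-style delivery: a JSON event POSTed to the
# standard Splunk HEC endpoint with token authentication.
# The URL, token, and payload values are illustrative placeholders.
import json
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # assumed host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

event = {
    "event": {
        "sourceIPv4Address": "10.1.2.3",
        "destinationIPv4Address": "10.4.5.6",
        "octetDeltaCount": 4096,
    },
    "sourcetype": "flow",  # the sourcetype depends on the installed Splunk app
}

req = urllib.request.Request(
    HEC_URL,
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Authorization": f"Splunk {HEC_TOKEN}",
        "Content-Type": "application/json",
    },
)
# Splunk answers {"text": "Success", "code": 0} when the event is accepted.
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```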

2. Standard Protocols

Universal formats used for observability pipelines and traditional SIEMs.

  • Syslog (UDP): Plain-text delivery for New Relic, Exabeam, and vRealize Log Insight.
  • JSON (UDP): Structured delivery for Datadog, Sumo Logic, SentinelOne (DataSet), Axoflow, or custom collectors (a verification sketch follows this list).
  • Repeater (UDP): Forwarding raw or processed flows to multiple secondary destinations.
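To spot-check either UDP stream, a few lines of Python listening on the configured port will show exactly what the collector receives. A minimal sketch, assuming the output points at port 5514 on the listening host (the port and payload shape are assumptions):

```python
# Minimal UDP listener for spot-checking Syslog (UDP) or JSON (UDP) output.
# Port 5514 is an assumption; match it to the port configured on the output.
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5514))

while True:
    data, addr = sock.recvfrom(65535)  # one datagram per flow record
    text = data.decode("utf-8", errors="replace")
    try:
        print(addr, json.loads(text))  # JSON (UDP) output parses cleanly
    except json.JSONDecodeError:
        print(addr, text)              # Syslog key=value output prints as-is
```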

3. Cloud & Data Lakes

Scalable storage options for massive volumes of network telemetry.

  • AWS S3: Writing partitioned logs directly to S3 buckets (see the listing sketch after this list).
  • Azure Blob Storage: Delivering data in Syslog or JSON format to Azure containers.
  • Disk: Local file storage for debugging or manual processing.
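As a quick way to confirm the S3 output is landing where you expect, you can list the bucket contents. A sketch using boto3; the bucket name and prefix are placeholders, and the real key layout depends on how the output partitions its logs:

```python
# List objects the S3 output has written, using boto3 (pip install boto3).
# Bucket and prefix are placeholders for your environment.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="my-flow-logs", Prefix="nfo/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```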

4. Messaging & Analytical Databases

Integration with modern streaming architectures and column-oriented stores.

  • Kafka: Publishing processed flows to Kafka topics (Syslog or JSON); a consumer sketch follows this list.
  • ClickHouse: High-speed delivery to ClickHouse for real-time big data analysis.
  • OpenTelemetry (OTLP): Sending metrics and logs in standardized OTLP format.
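On the consuming side, the Kafka output looks like any other producer publishing to a topic. A minimal consumer sketch using the kafka-python package; the topic name and broker address are assumptions for your environment:

```python
# Consume processed flow records from a Kafka topic (pip install kafka-python).
# Topic name and broker address are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "nfo-flows",                                  # assumed topic name
    bootstrap_servers="kafka.example.com:9092",   # assumed broker address
    value_deserializer=lambda v: v.decode("utf-8"),
    auto_offset_reset="earliest",
)

for msg in consumer:
    try:
        print(json.loads(msg.value))   # JSON-formatted output
    except json.JSONDecodeError:
        print(msg.value)               # Syslog-formatted output
```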

Core Output Principles

Regardless of the destination, NFO provides a standardized way to manage your data stream:

  • Format Flexibility: Choose between JSON, Syslog (Key=Value), or specialized API formats (like Splunk HEC or Azure Logs Ingestion) to match your receiver's requirements.
  • Field Selection: You have granular control over which metadata fields are included in the output. This allows you to balance data depth with storage costs.
  • Output Dictionary: Standardize field names across your enterprise. If your SIEM expects src_ip instead of sourceIPv4Address, the Output Dictionary handles this mapping automatically (see the sketch after this list).
  • Load Balancing & Resiliency: Configure multiple endpoints for failover and high-availability delivery.
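Conceptually, the Output Dictionary is a rename map applied to every record before delivery. The sketch below mirrors the src_ip example above to show the effect; the real mapping is configured inside NFO, not in code:

```python
# Illustration of what an Output Dictionary mapping does: rename fields on
# each record before delivery. The mapping mirrors the src_ip example in the
# text; it is illustrative, not NFO configuration.
OUTPUT_DICTIONARY = {
    "sourceIPv4Address": "src_ip",
    "destinationIPv4Address": "dest_ip",
}

def apply_dictionary(record: dict) -> dict:
    """Return a copy of the record with fields renamed per the dictionary."""
    return {OUTPUT_DICTIONARY.get(k, k): v for k, v in record.items()}

flow = {"sourceIPv4Address": "10.1.2.3", "octetDeltaCount": 4096}
print(apply_dictionary(flow))
# {'src_ip': '10.1.2.3', 'octetDeltaCount': 4096}
```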

Global Output Services

Before configuring a specific destination, familiarize yourself with these shared services that govern how data is formatted:

  • Output Dictionary: Learn how to rename fields and maintain consistency across disparate data sets.
  • Original Flow Data: Configure a separate, high-volume stream for raw flow data (one-to-one conversion) while keeping enriched alerts in your primary SIEM.
  • Field Selection & Formatting: A guide to the nfc_id filtering system and choosing which enrichment metadata to include in your output.