Deployment Architecture Considerations
Standalone vs. Distributed Deployment
- Standalone (Single Instance): All NFO components (Server, Controller, and optional EDFN) are consolidated on a single machine. This is suitable for small to medium environments, proof-of-concept evaluations, or centralized networks. Installation details are covered in the NFO Installation Guide.
- Distributed (NFO Central): This architecture is recommended when horizontal scalability is needed to process extremely high volumes of NetFlow data (typically exceeding 300,000 flows per second) or when intelligent traffic distribution and centralized management are required. It involves separating the core functions onto dedicated nodes:
  - NFO Central: The unified management hub, serving as the Intelligent Load Balancer.
  - NFO Peer Nodes: Dedicated servers for data collection, processing, and output.
 
Cloud vs. On-Premises Deployment
- On-Premises: Deploying NFO on your organization's physical or virtual infrastructure. Follow the Installation Guide for the chosen operating system.
- Cloud (Strategic Considerations): If deploying in the cloud (AWS, Azure, GCP), consider:
- Instance Selection: Choose appropriate virtual machine sizes based on the capacity planning outlined in the Administration Guide.
- Network Configuration: Configure cloud-based firewalls (Security Groups, Network Security Groups) to allow necessary inbound (flow data, GUI access) and outbound (to destination systems) traffic.
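As a concrete illustration, the required openings can be captured as data and checked against an existing rule set. The port numbers and rule shapes below are placeholders for illustration only, not documented NFO requirements; substitute the ports your deployment actually uses (see the Installation Guide):

```python
# Hypothetical checklist of security-group rules for a cloud-hosted NFO
# instance. Ports are placeholders, not actual NFO defaults.
REQUIRED_RULES = [
    {"direction": "inbound",  "proto": "udp", "port": 9995, "purpose": "flow data (NetFlow/IPFIX)"},
    {"direction": "inbound",  "proto": "tcp", "port": 443,  "purpose": "GUI access"},
    {"direction": "outbound", "proto": "tcp", "port": 514,  "purpose": "output to SIEM (syslog)"},
]

def missing_rules(existing):
    """Return the required rules not present in the existing rule set."""
    have = {(r["direction"], r["proto"], r["port"]) for r in existing}
    return [r for r in REQUIRED_RULES
            if (r["direction"], r["proto"], r["port"]) not in have]
```

A check like this can be run against the rules exported from your cloud provider before cutting traffic over to the NFO instance.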
 
Single Instance Deployment
In this scenario, one instance of NetFlow Optimizer handles all flow data processing, enrichment, and SNMP polling. A single-instance deployment can be useful for evaluation purposes and might be sufficient to serve the needs of small to medium-sized organizations.

Distributed Deployment with NFO Central
For large-scale, high-volume environments, NFO Central is the recommended solution, replacing ad-hoc flow replication with intelligent, load-balanced distribution. This centralized control plane ensures stability, performance, and simplified management.
Architecture Components
- NFO Central (Control Plane): Receives all flow data, performs health checks on peers, and distributes traffic. It also manages licenses centrally.
- NFO Peer Nodes (Data Plane): Dedicated to flow processing, enrichment, and output to downstream collectors.
Distributed Deployment – Horizontal Scalability (Intelligent Load Balancing)
This scenario addresses the need for increased processing capacity by intelligently distributing the workload across multiple NFO Peer Nodes. This is the preferred method for high-volume environments.

- Flow Ingestion: NFO Central is configured as the sole point of ingress, receiving all NetFlow data from your network devices.
- Intelligent Distribution: NFO Central utilizes its Intelligent Load Balancer functionality to continuously monitor the load of NFO Peer Nodes (e.g., NFO-1 and NFO-2). It then dynamically and automatically distributes the incoming flow streams to the least-loaded peer.
- Session Stickiness: The load balancer ensures that all traffic from a given exporter is consistently routed (sticky) to the same Peer Node (NFO-1 or NFO-2).
- Independent Processing: NFO-1 and NFO-2 independently process the NetFlow data they receive, applying the necessary NFO Logic Modules and configurations.
- Centralized Output: Both NFO-1 and NFO-2 are configured to send their processed output data to the same central SIEM or analytics platform.
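The least-loaded distribution and session stickiness described above can be sketched as a toy balancer. This is an illustration of the concept only, not NFO Central's actual algorithm; peer names are the NFO-1/NFO-2 examples from the scenario:

```python
# Toy sketch of sticky, least-loaded flow distribution (conceptual only).
from collections import defaultdict

class StickyLoadBalancer:
    def __init__(self, peers):
        self.peers = list(peers)        # e.g. ["NFO-1", "NFO-2"]
        self.load = defaultdict(int)    # datagrams routed per peer
        self.assignment = {}            # exporter IP -> peer (stickiness)

    def route(self, exporter_ip):
        # Known exporters stay pinned to their peer ("sticky").
        peer = self.assignment.get(exporter_ip)
        if peer is None:
            # New exporters go to the currently least-loaded peer.
            peer = min(self.peers, key=lambda p: self.load[p])
            self.assignment[exporter_ip] = peer
        self.load[peer] += 1
        return peer
```

Stickiness matters because flow templates (NetFlow v9/IPFIX) are sent per exporter; splitting one exporter's stream across peers could separate templates from the records that need them.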
Distributed Deployment – Various Configurations (Targeted Processing)
This scenario is ideal when you need to apply different sets of processing rules or configurations to flow data originating from specific exporters.

- Centralized Flow Reception: NFO Central is configured to receive all NetFlow data from your network infrastructure.
- Targeted Flow Forwarding: NFO Central's load balancing rules are configured to forward flow data based on the originating device.
  - Traffic from your core switches and routers (e.g., NetFlow v5) is directed to NFO-1.
  - Traffic from your edge devices (e.g., IPFIX format) is forwarded to NFO-2.
 
- Specialized Processing: NFO-1 is configured for specific processing (e.g., top traffic analysis), while NFO-2 is configured for a different kind of processing (e.g., detailed security analysis).
- Special Node Exclusion: You may use the NFO Central GUI to exclude a Peer Node from the load balancing pool, ensuring it receives all flows for specific, unique processing tasks that require the entire data stream.
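The targeted forwarding described above amounts to first-match routing rules keyed on the originating exporter. The sketch below is illustrative only; the subnets, peer names, and rule format are invented for the example and are not NFO configuration syntax:

```python
# Hypothetical first-match forwarding rules for targeted processing.
import ipaddress

# Ordered rules: (source subnet, destination peer). First match wins.
RULES = [
    (ipaddress.ip_network("10.1.0.0/16"), "NFO-1"),     # core switches/routers
    (ipaddress.ip_network("192.168.0.0/16"), "NFO-2"),  # edge devices
]
DEFAULT_PEER = "NFO-1"  # fallback for exporters matching no rule

def peer_for(exporter_ip: str) -> str:
    addr = ipaddress.ip_address(exporter_ip)
    for subnet, peer in RULES:
        if addr in subnet:
            return peer
    return DEFAULT_PEER
```

In the actual product these rules are defined through the NFO Central GUI rather than in code.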
Distributed Deployment – Various Outputs (Parallel Processing)
This scenario addresses the requirement to process the same flow data stream with different rules and send the resulting information to different types of storage or analysis platforms simultaneously.

- Centralized Flow Ingestion: NFO Central is configured to receive all NetFlow data from your network.
- Parallel Flow Processing: NFO Central utilizes its intelligent load balancing to send identical copies of the flow streams to multiple peers:
  - A full stream is forwarded to NFO-1, which processes the data to generate security and critical event data for your primary SIEM.
  - A full stream is also forwarded to NFO-2, which processes the data for long-term traffic trends and archives the output to a cost-effective storage solution such as AWS S3.
 
- Centralized Control: The entire deployment remains managed through the NFO Central GUI, which tracks the licenses and health of NFO-1 and NFO-2.
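Conceptually, parallel processing means every incoming datagram is replicated to every peer, so NFO-1 and NFO-2 each receive the full stream. A minimal fan-out sketch, with placeholder peer addresses (the real replication is configured in NFO Central, not written by hand):

```python
# Minimal sketch of full-stream fan-out to multiple peers (conceptual only).
import socket

PEERS = [("192.0.2.10", 9995), ("192.0.2.11", 9995)]  # placeholder addresses

def fan_out(datagram: bytes, peers=PEERS, sock=None):
    """Send one flow datagram to every peer, returning the addresses used."""
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = []
    for addr in peers:
        sock.sendto(datagram, addr)
        sent.append(addr)
    return sent
```

Unlike the load-balanced scenario, fan-out doubles the processing work, which is acceptable here because each peer applies a different rule set to the same data.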
These distributed deployment scenarios offer flexibility in how you leverage NetFlow Optimizer to meet your specific network monitoring, security, and analytical requirements. Remember to consult the NFO Administration Guide for detailed configuration instructions on Repeater functionality and managing multiple NFO instances.
Distributed Deployment in a Hybrid Environment
A hybrid environment, combining on-premises infrastructure with cloud resources, presents unique challenges and opportunities for network flow collection and analysis. NetFlow Optimizer (NFO) can be strategically deployed in such environments to provide comprehensive visibility across your entire infrastructure.
For high-volume scenarios, the NFO Central component is essential, as it manages the intelligent distribution of flow data across peer nodes, regardless of whether the data originates from your physical data center or a cloud platform (such as VPC Flow Logs). Consider the following scenarios for collecting flow data from both your physical and cloud resources.
On-Premises SIEM with Cloud Flow Collection
This scenario addresses the case where your primary Security Information and Event Management (SIEM) system is hosted within your on-premises data center, and you need to collect flow data from both your physical network devices and your cloud-based Virtual Private Cloud (VPC) environments.

- On-Premises Flow Collection: Within your data center, you would deploy one or more NFO instances to collect NetFlow, IPFIX, or sFlow data from your physical switches, routers, firewalls, and other network devices. For high-volume environments, it is recommended to deploy NFO Central as the single ingestion point, which then intelligently distributes the data to multiple NFO Peer Nodes for processing, following the principles outlined in the "Distributed Deployment on Premises" section. These NFO instances (or peer nodes) would then process the flow data according to your defined rules and configurations.
- Cloud Flow Collection: To gather flow logs from your cloud environment (e.g., AWS VPC Flow Logs, Azure Network Watcher flow logs, GCP VPC Flow Logs), you would typically deploy a separate EDFN instance within the cloud environment, ideally co-located within the same region as your VPCs. This cloud-based EDFN instance would be configured to ingest the native cloud flow logs, convert them to IPFIX, and send them to an NFO instance installed on premises.
- Centralized Output to On-Premises SIEM: The key to this scenario is to ensure that the flow data collected and processed by both the on-premises NFO instances and the cloud-based EDFN instance is ultimately sent to your central SIEM located in your data center. This requires establishing secure network connectivity between your cloud environment and your on-premises network, allowing the cloud-based EDFN instance to forward its processed data to the SIEM. This might involve VPN tunnels or dedicated network connections.
Cloud-Based SIEM with Hybrid Flow Collection
If your organization has adopted a cloud-first strategy and your SIEM solution is hosted in the cloud, the deployment architecture for NFO would shift accordingly. This scenario illustrates a recommended deployment approach in such cases.

- On-Premises Flow Collection and Forwarding: In your on-premises data center, you would still deploy one or more NFO instances to collect flow data from your physical network devices. For high-volume environments, it is recommended to deploy NFO Central as the unified management and ingestion point, which then distributes traffic to multiple NFO Peer Nodes for processing. Instead of sending the processed data to an on-premises SIEM, these NFO instances (or peer nodes) would be configured to securely forward their output to your cloud-based SIEM. This requires secure network connectivity from your data center to your cloud environment.
- Cloud Flow Collection: Similar to the previous scenario, you would deploy an NFO instance within your cloud environment, ideally near your VPCs, to ingest the native cloud flow logs. For very high-volume cloud ingestion, this may involve deploying NFO Central in the cloud to manage multiple NFO Peer Nodes within the cloud environment. This cloud-based NFO instance (or cluster) would then directly forward its processed data to the cloud-based SIEM.
- Centralized Output in the Cloud: In this architecture, the processed flow data originating from both your on-premises network and your cloud environment converges and is analyzed within your cloud-based SIEM.
Key Considerations for Hybrid Deployments
- Network Connectivity: Establishing reliable and secure network connectivity between all NFO instances and your SIEM is crucial.
- Security: Ensure all communication channels are secured using appropriate encryption.
- Cost Optimization: Right-size your cloud-based NFO instances to manage data transfer and storage costs.
- Centralized Management: Utilize NFO Central to manage the configuration and licenses of all geographically distributed NFO Peer Nodes.
By strategically deploying NFO in a hybrid environment, you can gain comprehensive visibility into network traffic patterns and security events across your entire infrastructure, regardless of where your resources are located. Remember to consult the NFO Administration Guide for detailed configuration instructions on connecting NFO instances and forwarding data to various destinations.
High Availability (HA) Planning (Conceptual)
If high availability is a requirement, the High Availability Deployment section of the Administration Guide will detail the steps for configuring HA features.