
Performance Tuning and Scaling

To handle high-volume telemetry (e.g., >100k flows/sec), NFO requires specific optimizations at the operating system and JVM levels. Without these adjustments, the OS may drop UDP packets before NFO can process them.

OS-Level Tuning (UDP Buffers)

By default, most Linux distributions have small UDP receive buffers. Under high load, the OS may drop packets before NFO can process them.

You can check whether any packets are being lost in the Linux kernel socket buffers by executing the following command:

netstat -suna

Under the Udp: section you should see:

xxxxxxxxxx packets received
xxxxxxxxxx packets to unknown port received.
xxxxxxxxxx packet receive errors
xxxxxxxxxx packets sent
0 receive buffer errors
0 send buffer errors

If you see high counters for receive or send buffer errors, see the sections below.
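
To watch these counters accumulate in real time, you can poll the UDP statistics periodically. A minimal sketch (the 5-second interval is arbitrary):

watch -n 5 "netstat -suna | grep -i errors"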

Input Buffer Settings (Linux)

The default Linux kernel settings are not sufficient for high packet rates, which can lead to dropped packets and data loss. We recommend that you change both the receive buffer size in NFO and the socket read buffer size in the Linux kernel.

To set the size of the receive buffer to <N> bytes in NFO, add the following line to <nfo_home>/server/etc/server.cfg:

IT_RCVBUF <N>

The valid values for parameter N are 124928 through 56623104. The default value is 12582912.
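
For example, to set a 32 MB receive buffer (33554432 bytes, which is within the valid range), the line in server.cfg would be:

IT_RCVBUF 33554432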

To set the size of the socket read buffer in the Linux kernel to <N> bytes for the current session, execute (with root privileges) the following command in a console:

sysctl -w net.core.rmem_max=<N>

To make this change persistent, do the following:

  1. Create a file with an arbitrary name, for example nfo-custom.conf, in the directory /etc/sysctl.d:
touch /etc/sysctl.d/nfo-custom.conf
  2. Add the following line to the file:
net.core.rmem_max=<N>
  3. Run the following command to load the settings from the file:
sysctl -p /etc/sysctl.d/nfo-custom.conf
  4. Restart the network service to apply the system changes. The command depends on the Linux platform you are using.
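
For example, the following sequence makes the setting persistent in one pass, using an illustrative 32 MB value (33554432 bytes):

echo "net.core.rmem_max=33554432" > /etc/sysctl.d/nfo-custom.conf
sysctl -p /etc/sysctl.d/nfo-custom.conf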

To check the socket read buffer size currently in use, execute the following command:

sysctl net.core.rmem_max
note
  1. NFO effectively uses the smaller of the two buffer sizes (the NFO receive buffer and the kernel socket read buffer).
  2. The NFO Virtual Appliance has its socket read buffer size set to 12582912 -- the default value for the NFO receive buffer.

Output Buffer Settings (Linux)

NFO may produce a large volume of syslog/JSON messages, sometimes in bursts (at the end of a data collection interval). This is especially true when Top N is set to 0 (meaning all messages are sent), or when several NFO Outputs are configured. With default Linux kernel settings, this can lead to packet drops on the way out. We recommend that you change the default and maximum socket send buffer sizes in the Linux kernel.

To set the size of the default socket send buffer in the Linux kernel to <N> bytes for the current session, execute (with root privileges) the following command in the console:

sysctl -w net.core.wmem_default=<N>

To set the size of the maximum socket send buffer in the Linux kernel to <N> bytes for the current session, execute (with root privileges) the following command in the console:

sysctl -w net.core.wmem_max=<N>

To make these changes persistent, add the following lines to /etc/sysctl.conf:

net.core.wmem_max=<N>
net.core.wmem_default=<N>

Then run the following command to reload the settings from the file:

sysctl -p

To check the socket send buffer size currently in use, execute the following command:

sysctl net.core.wmem_default
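
For example, to set both send buffer sizes to 16 MB (16777216 bytes, an illustrative value) for the current session:

sysctl -w net.core.wmem_default=16777216
sysctl -w net.core.wmem_max=16777216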

Another way to reduce output packet loss on the socket buffers is to configure output throttling in NFO. To enable output throttling at a rate of <N> messages per second, add the following lines to <nfo_home>/server/etc/server.cfg:

THROTTLE_OUTPUT 1
THROTTLE_OUTPUT_RATE <N>
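
For example, to cap output at 50000 messages per second (an illustrative rate; choose a value that matches the capacity of your outputs):

THROTTLE_OUTPUT 1
THROTTLE_OUTPUT_RATE 50000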

JVM Memory Management

NFO runs on Java. For high-volume processing, the memory allocated to the NFO service (the JVM heap size) must be sufficient to handle the data in flight.

  • Configuration: Adjust the -Xmx parameter in the NFO configuration files (see the sketch after this list).
  • Guideline: For high-volume instances, we recommend at least 8 GB of RAM dedicated to the JVM, depending on the number of active enrichments.
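
As a minimal sketch, assuming the JVM options are passed through a JAVA_OPTS-style variable (the exact file and variable name depend on your NFO installation), an 8 GB heap could be configured as:

JAVA_OPTS="-Xms8g -Xmx8g"

Setting -Xms equal to -Xmx avoids heap resizing pauses under sustained load.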

CPU Allocation

NFO is highly multi-threaded. In virtualized environments (VMware/KVM), ensure that NFO has dedicated vCPUs (no oversubscription) to maintain the real-time processing required for flow enrichment.
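
As an illustration on a KVM host, vCPUs can be pinned to dedicated physical cores with virsh (the domain name nfo-vm and the core numbers are hypothetical):

virsh vcpupin nfo-vm 0 2
virsh vcpupin nfo-vm 1 3

On VMware, the equivalent is to reserve CPU for the VM and avoid CPU overcommitment on the host.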