Tuning
Power Management Tuning
The following steps are for Intel processors. As other processors are used to implement a Fabric Node, steps for those systems will be added.
Depending on the processor used, it may be useful to enable or disable power state settings to ensure the system's full performance is available. The specifics depend on the processor platform; this guide details the steps Eluvio has taken on Intel hardware.
Edit /etc/default/grub so that GRUB_CMDLINE_LINUX_DEFAULT looks like the following:
GRUB_CMDLINE_LINUX_DEFAULT="intel_idle.max_cstate=0 processor.max_cstate=0 intel_pstate=disable vt.handoff=1"
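After saving the file, regenerate the GRUB configuration and reboot so the new kernel command line takes effect. The commands below are a sketch for Debian/Ubuntu-based systems; other distributions use grub2-mkconfig instead of update-grub. After the reboot, the active flags can be confirmed by reading /proc/cmdline:
sudo update-grub
sudo reboot
cat /proc/cmdline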
Network Tuning
Network tuning is intended to prepare nodes for serving large numbers of connections over a WAN. To do so, TCP parameters are increased and the BBR congestion algorithm is set to take advantage of the latest in TCP/IP networking in the Linux kernel.
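BBR requires Linux kernel 4.9 or later with the tcp_bbr module available. Before applying the settings below, it is worth verifying that BBR is listed as an available congestion control algorithm, and loading the module if it is not:
sysctl net.ipv4.tcp_available_congestion_control
sudo modprobe tcp_bbr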
The following can be added to /etc/sysctl.conf, although it is advised to place it in a file under /etc/sysctl.d. The preferred file is /etc/sysctl.d/90-eluvio.conf:
net.core.rmem_max = 262144000
net.core.rmem_default = 262144000
net.core.netdev_max_backlog = 10000
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
net.core.somaxconn = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.netfilter.nf_conntrack_max = 1048576
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
net.ipv4.tcp_mtu_probing = 1
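To apply these settings without a reboot, load all sysctl configuration files and confirm that BBR is now in use. Note that net.netfilter.nf_conntrack_max only exists once the nf_conntrack module is loaded, so that line may fail on hosts without connection tracking:
sudo sysctl --system
sysctl net.ipv4.tcp_congestion_control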
Warning
These parameters may not universally apply. They work for Eluvio-managed Fabric Nodes, but hardware and network configurations vary. Review each parameter to ensure you are comfortable with its ramifications.
Process Limit Tuning
The qfab process will need to support a large number of connections. This will force the process to use more than the default number of file descriptors; the limit is increased to support the expected load a server may encounter.
Service limits
The service definition, either system-wide or per service, can be altered to allow a large or infinite number of files to be open by a process. Each systemd service definition has a [Service] section. Add the following to the [Service] section of your service once it is set up:
LimitNOFILE=1000000
or
LimitNOFILE=infinity
Use whichever value matches your level of comfort.
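A clean way to add this without editing the unit file itself is a systemd drop-in override. As a sketch, assuming the service unit is named qfab.service (substitute your actual unit name):
sudo systemctl edit qfab.service
Then add to the override file that opens:
[Service]
LimitNOFILE=1000000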
If you are not using a service definition, you will need to set it system-wide in the [Manager] section of /etc/systemd/system.conf like so:
DefaultLimitNOFILE=1000000
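Once the definitions are reloaded as described below, the manager default can be confirmed with:
systemctl show -p DefaultLimitNOFILE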
In either case, you need to reload the definitions:
sudo systemctl daemon-reload
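After reloading, restart the service and verify that the running process received the higher limit. As a sketch, again assuming the unit and process are named qfab:
sudo systemctl restart qfab.service
cat /proc/$(pidof qfab)/limits | grep 'open files'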