
#1 2025-01-20 19:56:56

Kurobac
kuro
Location: 宛平南路600号
Registered: 2018-05-12
Posts: 104

Network performance problem

I have two PCs at home, one running Windows and one running Arch. Both use 10GbE NICs based on the AQC113 chip with the MTU set to 9000, and the switch is a TP-Link ST2008. I have verified with ping that every device on the path supports an MTU of 9000.
With both machines running Windows, iperf3 between the two PCs reaches about 9.7 Gbit/s, so the hardware should not be the bottleneck.
With Arch, however, network performance is noticeably lower. If the iperf server runs on Arch and the Windows box is the client, it only reaches just over 7 Gbit/s; in the reverse direction it exceeds 9.7 Gbit/s. Testing over SMB gives roughly the same result.
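(As a side note on the MTU check mentioned above, this is roughly how it can be done with ping. The 8972-byte payload assumes the usual 20-byte IPv4 header plus 8-byte ICMP header on top of a 9000-byte MTU; the addresses are the ones that appear in the iperf output below.)

# From the Arch box, with the don't-fragment bit set:
ping -M do -s 8972 -c 4 192.168.1.203
# The equivalent check from the Windows side:
ping -f -l 8972 192.168.1.7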
For example (first with Arch as the iperf3 server, then with Arch as the client):

Server listening on 5201 (test #2)
-----------------------------------------------------------
Accepted connection from 192.168.1.203, port 53346
[  5] local 192.168.1.7 port 5201 connected to 192.168.1.203 port 53347
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   872 MBytes  7.31 Gbits/sec                  
[  5]   1.00-2.00   sec   869 MBytes  7.29 Gbits/sec                  
[  5]   2.00-3.00   sec   845 MBytes  7.09 Gbits/sec                  
[  5]   3.00-4.00   sec   827 MBytes  6.93 Gbits/sec                  
[  5]   4.00-5.00   sec   845 MBytes  7.09 Gbits/sec                  
[  5]   5.00-6.00   sec   840 MBytes  7.04 Gbits/sec                  
[  5]   6.00-7.00   sec   826 MBytes  6.93 Gbits/sec                  
[  5]   7.00-8.00   sec   854 MBytes  7.17 Gbits/sec                  
[  5]   8.00-9.00   sec   800 MBytes  6.71 Gbits/sec                  
[  5]   9.00-10.00  sec   811 MBytes  6.80 Gbits/sec                  
[  5]  10.00-10.00  sec  2.38 MBytes  7.12 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  8.19 GBytes  7.04 Gbits/sec                  receiver
-----------------------------------------------------------
kurobac:~/ $ iperf3 -c 192.168.1.203                                              [19:28:36]
Connecting to host 192.168.1.203, port 5201
[  5] local 192.168.1.7 port 49686 connected to 192.168.1.203 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.11 GBytes  9.55 Gbits/sec    0    428 KBytes       
[  5]   1.00-2.00   sec  1.14 GBytes  9.78 Gbits/sec    0    428 KBytes       
[  5]   2.00-3.00   sec  1.14 GBytes  9.81 Gbits/sec    0    428 KBytes       
[  5]   3.00-4.00   sec  1.14 GBytes  9.76 Gbits/sec    0    428 KBytes       
[  5]   4.00-5.00   sec  1.14 GBytes  9.79 Gbits/sec    0    428 KBytes       
[  5]   5.00-6.00   sec  1.15 GBytes  9.84 Gbits/sec    0    428 KBytes       
[  5]   6.00-7.00   sec  1.13 GBytes  9.72 Gbits/sec    0    428 KBytes       
[  5]   7.00-8.00   sec  1.13 GBytes  9.73 Gbits/sec    0    428 KBytes       
[  5]   8.00-9.00   sec  1.13 GBytes  9.75 Gbits/sec    0    428 KBytes       
[  5]   9.00-10.00  sec  1.14 GBytes  9.79 Gbits/sec    0    428 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.4 GBytes  9.75 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  11.4 GBytes  9.75 Gbits/sec                  receiver

iperf Done.
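(For anyone trying to narrow this down, below is a rough sketch of receive-side checks on the Arch box. enp6s0 is only a placeholder for the actual AQC113 interface name, and the grep patterns are just examples; AQC113 cards are normally handled by the in-kernel atlantic driver.)

IFACE=enp6s0                     # placeholder, substitute the real interface from `ip link`
ip link show dev "$IFACE"        # confirm MTU 9000 is actually set on the Linux side
ethtool -i "$IFACE"              # driver and firmware version (usually "atlantic" for AQC113)
ethtool -k "$IFACE" | grep -Ei 'gro|lro|tso'      # offload state on the receive path
ethtool -g "$IFACE"              # RX/TX ring sizes
ethtool -c "$IFACE"              # interrupt coalescing settings
ethtool -S "$IFACE" | grep -Ei 'drop|miss|err'    # per-NIC counters, checked after a test run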

I have tested the linux-zen and linux-lts kernels as well as the CachyOS kernel; the performance difference between them was within 5%.
Following https://wiki.archlinux.org/title/Sysctl I tried the settings below, but they gave essentially no improvement (my feeling is that at 10 Gbit/s the defaults should not be a bottleneck in the first place).

net.core.netdev_max_backlog = 16384
net.core.somaxconn = 8192
net.ipv4.tcp_fastopen = 3
net.core.rmem_default = 1048576
net.core.rmem_max = 16777216
net.core.wmem_default = 1048576
net.core.wmem_max = 16777216
net.core.optmem_max = 65536
net.ipv4.tcp_rmem = 4096 1048576 2097152
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
net.ipv4.tcp_mtu_probing = 1
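(A minimal sketch of how such settings are applied persistently and then verified; the drop-in filename is arbitrary and the keys shown are just a spot-check of the list above.)

# settings saved in /etc/sysctl.d/90-net-tuning.conf (arbitrary name), then:
sudo sysctl --system                           # reload all sysctl drop-ins
sysctl net.core.rmem_max net.ipv4.tcp_rmem     # verify the values actually took effect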

I also tried enabling BBR, but that made no difference either.
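(The generic way to switch congestion control to BBR and confirm it actually took effect, in case that is useful to anyone reproducing this:)

sysctl net.ipv4.tcp_available_congestion_control   # "bbr" must be listed here
sudo modprobe tcp_bbr                              # load the module if it is not built in
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
sudo sysctl -w net.core.default_qdisc=fq           # qdisc commonly paired with BBR
sysctl net.ipv4.tcp_congestion_control             # confirm the active algorithm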

I also tested some other distributions from live images: with Ubuntu 24.10 and Gentoo running the iperf server I got around 8 Gbit/s, and CachyOS reached about 9.3 Gbit/s...

Also, running multiple parallel streams or using a larger buffer size does saturate the link, but I still cannot figure out why the single-stream case shows such a large gap...
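(To be concrete about what "multiple streams / larger buffer" means here: invocations like the ones below, run from the Windows client against the Arch box at 192.168.1.7. The stream count and window size are only example values.)

iperf3 -c 192.168.1.7            # single stream: the ~7 Gbit/s case
iperf3 -c 192.168.1.7 -P 4       # several parallel streams saturate the link
iperf3 -c 192.168.1.7 -w 4M      # a single stream with a larger socket buffer also does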


echo "Good news: your computer has $(locate "chrome-sandbox" | wc -l) copies of Chromium installed\!"

