path: root/Documentation/devicetree/bindings/net/mdio.txt
author     David S. Miller <davem@davemloft.net>  2017-04-26 14:44:39 -0400
committer  David S. Miller <davem@davemloft.net>  2017-04-26 14:44:39 -0400
commit     4629498336e48516f5a03ee2788701f1d3f20168 (patch)
tree       7ceebdc356adf410a9d78bd51d7e200697d19658 /Documentation/devicetree/bindings/net/mdio.txt
parent     rhashtable: remove insecure_max_entries param (diff)
parent     tcp: switch rcv_rtt_est and rcvq_space to high resolution timestamps (diff)
Merge branch 'tcp-do-not-use-tcp_time_stamp-for-rcv-autotuning'
Eric Dumazet says:

====================
tcp: do not use tcp_time_stamp for rcv autotuning

Some devices or Linux distributions use HZ=100 or HZ=250.

TCP receive buffer autotuning has poor behavior caused by this choice.
Since autotuning happens only after 4 ms or 10 ms, short-distance flows
get their receive buffer tuned to a very high value, but only after an
initial period where it was frozen at the (too small) initial value.

With BBR (or another CC allowing the BDP to increase), we are willing to
increase tcp_rmem[2], but this receive-autotuning defect is a blocker
for hosts dealing with gazillions of TCP flows in the data centers,
since many of them would have inflated RCVBUF. The risk of OOM is too
high.

Note that TSO autodefer, TCP cubic, and the TCP TS option (RFC 7323)
also suffer from our dependency on jiffies (via tcp_time_stamp). We have
ongoing efforts to improve all of that in the future.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to '')
0 files changed, 0 insertions, 0 deletions