| author | 2007-08-09 14:53:36 +0300 |
|---|---|
| committer | 2007-10-10 16:47:58 -0700 |
| commit | 1b6d427bb7eb69e6dc4f194a5b0f4a382a16ff82 (patch) |
| tree | d67f6ea9a5f581f83b4d8228fc2964c70f940d5a /net/ipv4/tcp_timer.c |
| parent | [TCP]: Keep state in Disorder also if only lost_out > 0 (diff) |
[TCP]: Reduce sacked_out with reno when purging write_queue
Previously, TCP had a transitional state during which Reno counted
segments that were already below the current window into sacked_out;
that is now prevented. In addition, the unconditional S+L skb
catching is now re-tried.
This approach is conservative: it only calls remove_sack and leaves
the reset_sack() call sites alone. The best solution to the whole
problem would be to calculate the new sacked_out fully up front (this
patch does not move the reno_sack_reset calls from their original
sites and thus does not implement this). However, that would require
a very invasive change to fastretrans_alert (perhaps even splitting
it into two halves). Alternatively, all callers of
tcp_packets_in_flight (i.e., users that depend on sacked_out) could
be postponed until the new sacked_out has been calculated, but that
alternative is no simpler.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/ipv4/tcp_timer.c')
0 files changed, 0 insertions, 0 deletions
