path: root/include/linux/netdevice.h
author    Eric Dumazet <edumazet@google.com>  2022-05-15 21:24:53 -0700
committer David S. Miller <davem@davemloft.net>  2022-05-16 11:33:59 +0100
commit    97e719a82b43c6c2bb5eebdb3c5d479a332ac2ac (patch)
tree      7e38c6a88703169d84365cb91c9aaab5aabe0adb /include/linux/netdevice.h
parent    net: tulip: convert to devres (diff)
net: fix possible race in skb_attempt_defer_free()
A CPU can observe sd->defer_count reaching 128 and call smp_call_function_single_async().

The problem is that the remote CPU can clear sd->defer_count before the IPI is run/acknowledged. Other CPUs can queue more packets in the meantime and also decide to call smp_call_function_single_async() while the pending IPI has not yet been delivered.

This is a common issue with smp_call_function_single_async(): callers must ensure correct synchronization and serialization themselves.

I triggered this issue while experimenting with a smaller threshold. Performing the call to smp_call_function_single_async() under sd->defer_lock protection did not solve the problem.

Commit 5a18ceca6350 ("smp: Allow smp_call_function_single_async() to insert locked csd") replaced an informative WARN_ON_ONCE() with a return of -EBUSY, which is often ignored. Testing for the presence of CSD_FLAG_LOCK is racy anyway.

Fixes: 68822bdf76f1 ("net: generalize skb freeing deferral to per-cpu lists")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
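The fix adds a per-softnet_data flag, defer_ipi_scheduled (see the diff below), so that at most one defer-free IPI is in flight per CPU at a time: a sender may only issue the IPI if it wins the flag's 0 -> 1 transition, and the remote CPU clears the flag once the IPI has run. Below is a minimal userspace sketch of that serialization pattern, using C11 atomics in place of the kernel's primitives; the names maybe_schedule_ipi() and ipi_handler() and the reduced struct are illustrative stand-ins, not the kernel's actual code.

    #include <stdatomic.h>
    #include <stdio.h>

    /* Stand-in for softnet_data: only the flag that serializes IPIs. */
    struct softnet_data {
            atomic_int defer_ipi_scheduled; /* 0: no IPI in flight, 1: pending */
    };

    /* Stand-in for the IPI handler running on the remote CPU: it would
     * drain the deferred-skb list, then re-arm by clearing the flag with
     * release semantics so the next sender may schedule a fresh IPI.
     */
    static void ipi_handler(struct softnet_data *sd)
    {
            /* ... drain sd->defer_list and free the deferred skbs here ... */
            atomic_store_explicit(&sd->defer_ipi_scheduled, 0,
                                  memory_order_release);
    }

    /* Sender side: only the CPU that wins the 0 -> 1 transition issues
     * the IPI; every other CPU sees one is already pending and backs off.
     */
    static void maybe_schedule_ipi(struct softnet_data *sd)
    {
            int expected = 0;

            if (atomic_compare_exchange_strong(&sd->defer_ipi_scheduled,
                                               &expected, 1))
                    printf("IPI scheduled\n"); /* would call smp_call_function_single_async() */
            else
                    printf("IPI already pending, skipping\n");
    }

    int main(void)
    {
            struct softnet_data sd = { .defer_ipi_scheduled = 0 };

            maybe_schedule_ipi(&sd); /* wins the transition: schedules the IPI */
            maybe_schedule_ipi(&sd); /* loses: an IPI is already in flight */
            ipi_handler(&sd);        /* "remote CPU" runs, drains, clears flag */
            maybe_schedule_ipi(&sd); /* wins again */
            return 0;
    }

The design choice is to open-code a dedicated "pending" bit rather than rely on smp_call_function_single_async()'s -EBUSY return or on probing CSD_FLAG_LOCK, which, as the changelog notes, is often ignored and racy to test externally.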
Diffstat (limited to 'include/linux/netdevice.h')
-rw-r--r--  include/linux/netdevice.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index d57ce248004c..cbaf312e365b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3136,6 +3136,7 @@ struct softnet_data {
/* Another possibly contended cache line */
spinlock_t defer_lock ____cacheline_aligned_in_smp;
int defer_count;
+ int defer_ipi_scheduled;
struct sk_buff *defer_list;
call_single_data_t defer_csd;
};