author		Florian Westphal <fw@strlen.de>		2017-11-30 21:08:05 +0100
committer	Pablo Neira Ayuso <pablo@netfilter.org>	2018-01-08 18:01:04 +0100
commit		a778a15fa5cf5f632cd55845f548189a29e9b57b (patch)
tree		e9259d75cb6a7342e4cacf2d982ff6ef812a8b6d /net/netfilter/ipset/ip_set_hash_gen.h
parent		netfilter: ipset: use nfnl_mutex_is_locked (diff)
download	linux-dev-a778a15fa5cf5f632cd55845f548189a29e9b57b.tar.xz
		linux-dev-a778a15fa5cf5f632cd55845f548189a29e9b57b.zip
netfilter: ipset: add resched points during set listing
When sets are extremely large we can get a softlockup during ipset -L.

We could fix this by adding cond_resched_rcu() at the right location during iteration, but this only works if the RCU nesting depth is 1.

At this time the entire variant->list() is called under rcu_read_lock_bh. This used to be a read_lock_bh(), but as RCU doesn't really lock anything, it does not appear to be needed, so remove it (ipset increments the set reference count before this, so a set deletion should not be possible).

Reported-by: Li Shuang <shuali@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
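For context on why the nesting depth matters: on configurations where it is not a no-op, cond_resched_rcu() briefly drops rcu_read_lock() around cond_resched() and then re-acquires it, so it only helps when that is the sole RCU read-side lock held and no pointer obtained before the resched point is reused after it. The sketch below is a minimal illustration of the pattern this commit applies; the demo_table/demo_elem types and demo_list_all() are hypothetical and not code from this patch or from ipset.

#include <linux/kernel.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

/* Hypothetical element and table types, for illustration only. */
struct demo_elem {
	struct hlist_node node;
	u32 key;
};

struct demo_table {
	unsigned int nbuckets;
	struct hlist_head bucket[];
};

/* Walk every bucket under rcu_read_lock(), rescheduling between buckets. */
static void demo_list_all(struct demo_table *t)
{
	struct demo_elem *e;
	unsigned int i;

	rcu_read_lock();
	for (i = 0; i < t->nbuckets; i++) {
		/*
		 * Safe only because no outer rcu_read_lock() is held and no
		 * pointer from a previous bucket is used after this point:
		 * cond_resched_rcu() may drop and re-take the read lock.
		 */
		cond_resched_rcu();
		hlist_for_each_entry_rcu(e, &t->bucket[i], node)
			pr_info("bucket %u: key %u\n", i, e->key);
	}
	rcu_read_unlock();
}

The patch below does exactly this in the hash-type listing loop: one resched point per bucket, placed before any element of that bucket is dereferenced.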
Diffstat (limited to 'net/netfilter/ipset/ip_set_hash_gen.h')
-rw-r--r--	net/netfilter/ipset/ip_set_hash_gen.h	1
1 file changed, 1 insertion, 0 deletions
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index efffc8eabafe..8ef079db7d34 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -1143,6 +1143,7 @@ mtype_list(const struct ip_set *set,
 	rcu_read_lock();
 	for (; cb->args[IPSET_CB_ARG0] < jhash_size(t->htable_bits);
 	     cb->args[IPSET_CB_ARG0]++) {
+		cond_resched_rcu();
 		incomplete = skb_tail_pointer(skb);
 		n = rcu_dereference(hbucket(t, cb->args[IPSET_CB_ARG0]));
 		pr_debug("cb->arg bucket: %lu, t %p n %p\n",