path: root/net/ipv6/icmp.c
author: Daniel Borkmann <daniel@iogearbox.net> 2016-11-04 00:01:19 +0100
committer: David S. Miller <davem@davemloft.net> 2016-11-07 13:20:52 -0500
commit: 483bed2b0ddd12ec33fc9407e0c6e1088e77a97c (patch)
tree: aa01c5eb2cc793ea5e3629ccf59c5977c28c0264 /net/ipv6/icmp.c
parent: sctp: assign assoc_id earlier in __sctp_connect (diff)
bpf: fix htab map destruction when extra reserve is in use
Commit a6ed3ea65d98 ("bpf: restore behavior of bpf_map_update_elem") added an extra per-cpu reserve to the hash table map to restore the old behaviour from pre-prealloc times. When non-prealloc is in use for a map, the problem is that once a hash table extra element has been linked into the hash table, and the hash table is then destroyed because its refcount drops to zero, htab_map_free() -> delete_all_elements() walks the whole hash table and drops all elements via htab_elem_free(). The element from the extra reserve is thus first fed to the wrong backend allocator and eventually freed twice.

Fixes: a6ed3ea65d98 ("bpf: restore behavior of bpf_map_update_elem")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to '')
0 files changed, 0 insertions, 0 deletions