author		David S. Miller <davem@davemloft.net>	2021-05-03 18:40:17 -0700
committer	David S. Miller <davem@davemloft.net>	2021-05-03 18:40:17 -0700
commit		1682d8df20aa505f6ab12c76e934b26ede39c529 (patch)
tree		c3534a6946f300d4fc0d565891d2806983fa9a5b /net
parent		Documentation: ABI: sysfs-class-net-qmi: document pass-through file (diff)
parent		xsk: Fix for xp_aligned_validate_desc() when len == chunk_size (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Daniel Borkmann says:

====================
pull-request: bpf 2021-05-04

The following pull-request contains BPF updates for your *net* tree.

We've added 5 non-merge commits during the last 4 day(s) which contain
a total of 6 files changed, 52 insertions(+), 30 deletions(-).

The main changes are:

1) Fix libbpf overflow when processing BPF ring buffer in case of extreme
   application behavior, from Brendan Jackman.

2) Fix potential data leakage of uninitialized BPF stack under speculative
   execution, from Daniel Borkmann.

3) Fix off-by-one when validating xsk pool chunks, from Xuan Zhuo.

4) Fix snprintf BPF selftest with a pid filter to avoid racing its output
   test buffer, from Florent Revest.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net')
-rw-r--r--	net/xdp/xsk_queue.h	7
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 2ac3802c2cd7..9d2a89d793c0 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -128,13 +128,12 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
 static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
 					    struct xdp_desc *desc)
 {
-	u64 chunk, chunk_end;
+	u64 chunk;
 
-	chunk = xp_aligned_extract_addr(pool, desc->addr);
-	chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len);
-	if (chunk != chunk_end)
+	if (desc->len > pool->chunk_size)
 		return false;
 
+	chunk = xp_aligned_extract_addr(pool, desc->addr);
 	if (chunk >= pool->addrs_cnt)
 		return false;
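
The off-by-one is visible at the len == chunk_size boundary: the old code derived the chunk of desc->addr + desc->len, which for a descriptor that exactly fills an aligned chunk lands on the start of the next chunk and is therefore rejected, while the new code simply bounds desc->len by pool->chunk_size. Below is a minimal user-space sketch of the two checks (not kernel code); CHUNK_SIZE, POOL_SIZE, extract_chunk(), old_validate() and new_validate() are illustrative names standing in for the pool fields and the xp_aligned_extract_addr() helper used in the patch, and the sketch assumes a power-of-two aligned chunk size.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define CHUNK_SIZE 2048ULL              /* assumed power-of-two chunk size */
	#define POOL_SIZE  (4 * CHUNK_SIZE)     /* assumed total umem size in bytes */

	/* Rough stand-in for xp_aligned_extract_addr(): mask off the in-chunk offset. */
	static uint64_t extract_chunk(uint64_t addr)
	{
		return addr & ~(CHUNK_SIZE - 1);
	}

	/* Pre-patch logic: recompute the chunk of addr + len and compare. */
	static bool old_validate(uint64_t addr, uint32_t len)
	{
		uint64_t chunk = extract_chunk(addr);
		uint64_t chunk_end = extract_chunk(addr + len);

		if (chunk != chunk_end)
			return false;
		return chunk < POOL_SIZE;
	}

	/* Post-patch logic: bound len by the chunk size directly. */
	static bool new_validate(uint64_t addr, uint32_t len)
	{
		if (len > CHUNK_SIZE)
			return false;
		return extract_chunk(addr) < POOL_SIZE;
	}

	int main(void)
	{
		/* Descriptor that fills one whole chunk: len == chunk_size. */
		uint64_t addr = 1 * CHUNK_SIZE;
		uint32_t len = CHUNK_SIZE;

		printf("len == chunk_size, old check: %s\n",
		       old_validate(addr, len) ? "valid" : "rejected");
		printf("len == chunk_size, new check: %s\n",
		       new_validate(addr, len) ? "valid" : "rejected");
		return 0;
	}

Compiled and run, the sketch reports the full-chunk descriptor as rejected by the old check and valid by the new one, which is the behavior change named in the parent commit's subject line.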