author    Thomas Gleixner <tglx@linutronix.de>  2025-08-02 12:48:55 +0200
committer Thomas Gleixner <tglx@linutronix.de>  2025-08-05 21:55:29 +0200
commit    f74b9f4ba63ffdf597aaaa6cad7e284cb8e04820 (patch)
tree      1e20097104c1ef5073c52e31a1b54b04414c5831
parent    perf/core: Exit early on perf_mmap() fail (diff)
perf/core: Handle buffer mapping fail correctly in perf_mmap()
After successful allocation of a buffer or a successful attachment to an
existing buffer perf_mmap() tries to map the buffer read only into the page
table. If that fails, the already set up page table entries are zapped, but
the other perf specific side effects of that failure are not handled. The
calling code just cleans up the VMA and does not invoke perf_mmap_close().
This leaks reference counts, corrupts user->vm accounting and also results
in an unbalanced invocation of event::event_mapped().

Cure this by moving the event::event_mapped() invocation before the
map_range() call so that on map_range() failure perf_mmap_close() can be
invoked without causing an unbalanced event::event_unmapped() call.
perf_mmap_close() undoes the reference counts and eventually frees buffers.

Fixes: b709eb872e19 ("perf: map pages in advance")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: stable@vger.kernel.org
 kernel/events/core.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index a2e3591175c6..4563bd864bbc 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7148,12 +7148,20 @@ aux_unlock:
 	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &perf_mmap_vmops;
 
-	ret = map_range(rb, vma);
-
 	mapped = get_mapped(event, event_mapped);
 	if (mapped)
 		mapped(event, vma->vm_mm);
 
+	/*
+	 * Try to map it into the page table. On fail, invoke
+	 * perf_mmap_close() to undo the above, as the callsite expects
+	 * full cleanup in this case and therefore does not invoke
+	 * vmops::close().
+	 */
+	ret = map_range(rb, vma);
+	if (ret)
+		perf_mmap_close(vma);
+
 	return ret;
 }
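
For readability, the tail of perf_mmap() after this change then reads roughly
as follows. This is a simplified sketch assembled from the hunk above; the
earlier buffer allocation/attachment, locking and the other error paths are
elided:

	/* Sketch: perf_mmap() tail after the fix (earlier setup elided). */
	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
	vma->vm_ops = &perf_mmap_vmops;

	/* Notify the PMU before mapping so that a failure below can be undone. */
	mapped = get_mapped(event, event_mapped);
	if (mapped)
		mapped(event, vma->vm_mm);

	/*
	 * Map the buffer read only into the page table. On failure the
	 * caller only tears down the VMA and never invokes vmops::close(),
	 * so the references and accounting are dropped here via
	 * perf_mmap_close().
	 */
	ret = map_range(rb, vma);
	if (ret)
		perf_mmap_close(vma);

	return ret;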