author    Dave Airlie <airlied@redhat.com>  2025-03-11 10:26:08 +1000
committer Dave Airlie <airlied@redhat.com>  2025-03-11 10:26:17 +1000
commit    11a5c6445ab86f2562510b46355201012352c9c5 (patch)
tree      59e206f3a15ce47a3fcad600be094d88a0b18dca /Documentation/gpu
parent    Merge tag 'drm-msm-next-2025-03-09' of https://gitlab.freedesktop.org/drm/msm into drm-next (diff)
parent    drm/doc: gpusvm: Add GPU SVM documentation (diff)
Merge tag 'drm-xe-next-2025-03-07' of https://gitlab.freedesktop.org/drm/xe/kernel into drm-next
UAPI Changes:
- Expose per-engine activity via perf pmu (Riana, Lucas, Umesh)
- Add support for EU stall sampling (Harish, Ashutosh)
- Allow userspace to provide low latency hint for submission (Tejas)
- GPU SVM and Xe SVM implementation (Matthew Brost)

Cross-subsystem Changes:
- devres handling for component drivers (Lucas)
- Backmerge drm-next to allow cross dependent change with i915
- GPU SVM and Xe SVM implementation (Matthew Brost)

Core Changes:

Driver Changes:
- Fixes to userptr and missing validations (Matthew Auld, Thomas Hellström, Matthew Brost)
- devcoredump typos and error handling improvement (Shuicheng)
- Allow oa_exponent value of 0 (Umesh)
- Finish moving device probe to devm (Lucas)
- Fix race between submission restart and scheduled being freed (Tejas)
- Fix counter overflows in gt_stats (Francois)
- Refactor and add missing workarounds and tunings for pre-Xe2 platforms (Aradhya, Tvrtko)
- Fix PXP locks interaction with exec queues being killed (Daniele)
- Eliminate TIMESTAMP_OVERRIDE from xe (Matt Roper)
- Change xe_gen_wa_oob to allow building on MacOS (Daniel Gomez)
- New workarounds for Panther Lake (Tejas)
- Fix VF resume errors (Satyanarayana)
- Fix workaround infra skipping some workarounds dependent on engine initialization (Tvrtko)
- Improve per-IP descriptors (Gustavo)
- Add more error injections to probe sequence (Francois)

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/ilc5jvtyaoyi6woyhght5a6sw5jcluiojjueorcyxbynrcpcjp@mw2mi6rd6a7l
Diffstat (limited to 'Documentation/gpu')
-rw-r--r--  Documentation/gpu/rfc/gpusvm.rst | 107
-rw-r--r--  Documentation/gpu/rfc/index.rst  |   4
2 files changed, 111 insertions, 0 deletions
diff --git a/Documentation/gpu/rfc/gpusvm.rst b/Documentation/gpu/rfc/gpusvm.rst
new file mode 100644
index 000000000000..073e46065d9c
--- /dev/null
+++ b/Documentation/gpu/rfc/gpusvm.rst
@@ -0,0 +1,107 @@
+.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+
+===============
+GPU SVM Section
+===============
+
+Agreed upon design principles
+=============================
+
+* migrate_to_ram path
+ * Rely only on core MM concepts (migration PTEs, page references, and
+ page locking).
+  * No driver-specific locks other than locks for hardware interaction in
+    this path. Inventing driver-defined locks to seal core MM races is not
+    required and is generally a bad idea.
+ * An example of a driver-specific lock causing issues occurred before
+ fixing do_swap_page to lock the faulting page. A driver-exclusive lock
+ in migrate_to_ram produced a stable livelock if enough threads read
+ the faulting page.
+ * Partial migration is supported (i.e., a subset of pages attempting to
+ migrate can actually migrate, with only the faulting page guaranteed
+ to migrate).
+  * The driver handles mixed migrations via retry loops rather than locking
+    (see the migrate_to_ram sketch after this list).
+* Eviction
+ * Eviction is defined as migrating data from the GPU back to the
+ CPU without a virtual address to free up GPU memory.
+  * Only physical memory data structures and locks are examined, as opposed
+    to virtual memory data structures and locks.
+  * Do not look at mm/vma structs or rely on those being locked.
+ * The rationale for the above two points is that CPU virtual addresses
+ can change at any moment, while the physical pages remain stable.
+ * GPU page table invalidation, which requires a GPU virtual address, is
+ handled via the notifier that has access to the GPU virtual address.
+* GPU fault side
+  * mmap_read is only taken around core MM functions which require this
+    lock, and ideally only within the GPU SVM layer.
+  * A big retry loop handles all races with the mmu notifier under the gpu
+    pagetable locks / mmu notifier range lock / whatever we end up calling
+    those (see the fault-handler sketch after this list).
+ * Races (especially against concurrent eviction or migrate_to_ram)
+ should not be handled on the fault side by trying to hold locks;
+ rather, they should be handled using retry loops. One possible
+ exception is holding a BO's dma-resv lock during the initial migration
+ to VRAM, as this is a well-defined lock that can be taken underneath
+ the mmap_read lock.
+ * One possible issue with the above approach is if a driver has a strict
+ migration policy requiring GPU access to occur in GPU memory.
+ Concurrent CPU access could cause a livelock due to endless retries.
+ While no current user (Xe) of GPU SVM has such a policy, it is likely
+ to be added in the future. Ideally, this should be resolved on the
+ core-MM side rather than through a driver-side lock.
+* Physical memory to virtual backpointer
+  * This does not work, as no pointers from physical memory to virtual
+    memory should exist. mremap() is an example of the core MM updating
+    the virtual address without notifying the driver of the address
+    change; the driver only receives the invalidation notifier.
+ * The physical memory backpointer (page->zone_device_data) should remain
+ stable from allocation to page free. Safely updating this against a
+ concurrent user would be very difficult unless the page is free.
+* GPU pagetable locking
+  * The notifier lock only protects the range tree, the pages-valid state
+    for a range (rather than a seqno, due to wider notifiers), pagetable
+    entries, and mmu notifier seqno tracking; it is not a global lock to
+    protect against races.
+ * All races handled with big retry as mentioned above.
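+
+To make the migrate_to_ram rules above concrete, here is a minimal sketch of
+a dev_pagemap_ops.migrate_to_ram() handler that relies only on core MM
+primitives and supports partial migration. MY_CHUNK, my_pgmap_owner, and
+my_copy_from_vram() are hypothetical driver names; the migrate_vma_*() calls
+and the fault_page field are real core MM interfaces.
+
+.. code-block:: c
+
+   /*
+    * Hypothetical migrate_to_ram handler: no driver locks, core MM only.
+    * Clamping the window to the VMA boundaries is omitted for brevity.
+    */
+   static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf)
+   {
+           unsigned long src[MY_CHUNK] = {}, dst[MY_CHUNK] = {};
+           struct migrate_vma migrate = {
+                   .vma         = vmf->vma,
+                   .start       = ALIGN_DOWN(vmf->address,
+                                             MY_CHUNK * PAGE_SIZE),
+                   .src         = src,
+                   .dst         = dst,
+                   .pgmap_owner = my_pgmap_owner,
+                   .flags       = MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
+                   .fault_page  = vmf->page, /* locked by do_swap_page() */
+           };
+           unsigned long i;
+
+           migrate.end = migrate.start + MY_CHUNK * PAGE_SIZE;
+           if (migrate_vma_setup(&migrate))
+                   return VM_FAULT_SIGBUS;
+
+           for (i = 0; i < migrate.npages; i++) {
+                   struct page *dpage;
+
+                   /* Partial migration: skip pages that did not collect. */
+                   if (!(src[i] & MIGRATE_PFN_MIGRATE))
+                           continue;
+
+                   dpage = alloc_page(GFP_HIGHUSER);
+                   if (!dpage)
+                           continue; /* dst[i] == 0 -> page not migrated */
+
+                   lock_page(dpage);
+                   /* Synchronous VRAM -> system copy, e.g. via a blitter. */
+                   my_copy_from_vram(migrate_pfn_to_page(src[i]), dpage);
+                   dst[i] = migrate_pfn(page_to_pfn(dpage));
+           }
+
+           migrate_vma_pages(&migrate);
+           migrate_vma_finalize(&migrate);
+           return 0;
+   }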
+
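+The fault-side "big retry loop" and the notifier seqno check under the GPU
+pagetable lock can be sketched as follows, assuming a hypothetical my_range
+struct and my_gpu_get_pages()/my_gpu_bind_pages() helpers;
+mmu_interval_read_begin() and mmu_interval_read_retry() are the core MM
+interval-notifier helpers.
+
+.. code-block:: c
+
+   struct my_range {
+           struct mmu_interval_notifier notifier;
+           spinlock_t gpu_pt_lock; /* protects GPU PTEs and seqno check */
+           /* ... pages, GPU virtual address, etc. ... */
+   };
+
+   int my_gpu_fault(struct my_range *range)
+   {
+           unsigned long seq;
+           int err;
+
+   retry:
+           /* Snapshot the notifier sequence before collecting pages. */
+           seq = mmu_interval_read_begin(&range->notifier);
+
+           /* Takes mmap_read internally; may race with invalidation. */
+           err = my_gpu_get_pages(range);
+           if (err)
+                   return err;
+
+           spin_lock(&range->gpu_pt_lock);
+           /* A notifier ran since read_begin(): drop the work and retry. */
+           if (mmu_interval_read_retry(&range->notifier, seq)) {
+                   spin_unlock(&range->gpu_pt_lock);
+                   goto retry;
+           }
+           err = my_gpu_bind_pages(range); /* program GPU pagetables */
+           spin_unlock(&range->gpu_pt_lock);
+
+           return err;
+   }
+
+Races with concurrent eviction or migrate_to_ram are absorbed by the retry
+rather than by holding extra locks, matching the principles above.
+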
+Overview of baseline design
+===========================
+
+The baseline design is as simple as possible, providing a working baseline
+that can be built upon.
+
+.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
+ :doc: Overview
+ :doc: Locking
+  :doc: Migration
+ :doc: Partial Unmapping of Ranges
+ :doc: Examples
+
+Possible future design features
+===============================
+
+* Concurrent GPU faults
+  * CPU faults are concurrent, so it makes sense to have concurrent GPU
+    faults.
+  * Should be possible with fine-grained locking in the driver GPU
+    fault handler.
+ * No expected GPU SVM changes required.
+* Ranges with mixed system and device pages
+  * Can be added to drm_gpusvm_get_pages fairly easily if required.
+* Multi-GPU support
+  * Work in progress; patches are expected after GPU SVM initially lands.
+ * Ideally can be done with little to no changes to GPU SVM.
+* Drop ranges in favor of radix tree
+ * May be desirable for faster notifiers.
+* Compound device pages
+  * Nvidia, AMD, and Intel have all agreed that expensive core MM functions
+    in the migrate device layer are a performance bottleneck; compound
+    device pages should help increase performance by reducing the number
+    of these expensive calls.
+* Higher order dma mapping for migration
+  * 4k dma mapping adversely affects migration performance on Intel
+    hardware; higher-order (2M) dma mapping should help here.
+* Build common userptr implementation on top of GPU SVM
+* Driver side madvise implementation and migration policies
+* Pull in pending dma-mapping API changes from Leon / Nvidia when these land
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 476719771eef..396e535377fb 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -18,6 +18,10 @@ host such documentation:
.. toctree::
+ gpusvm.rst
+
+.. toctree::
+
i915_gem_lmem.rst
.. toctree::