path: root/drivers/gpu/drm/i915/gem/i915_gem_pages.c
Age  Commit message  Author  Files  Lines
2022-08-29  Merge drm/drm-next into drm-intel-next  Jani Nikula  1  -9/+16
Sync drm-intel-next with v6.0-rc as well as recent drm-intel-gt-next. Since drm-next does not yet have commit f0c70d41e4e8 ("drm/i915/guc: remove runtime info printing from time stamp logging"), which so far exists only in drm-intel-gt-next, that change has to be accounted for as part of the merge here for the result to build. Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2022-08-29  drm/i915: split gem quirks from display quirks  Jani Nikula  1  -1/+1
The lone gem quirk is an outlier, not even handled by the common quirk code. Split it to a separate gem_quirks member. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/fe9c0cb1e49da0ddc31d24c996af5fd09bce3042.1661346845.git.jani.nikula@intel.com
2022-08-24  drm/i915: move page_sizes to runtime info  Jani Nikula  1  -1/+1
If it's modified at runtime, it's runtime info. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/f6825dd97d2ba63aa395c30131c4b9e6ef32b0c8.1660910433.git.jani.nikula@intel.com
2022-08-08  drm/i915/gt: Batch TLB invalidations  Chris Wilson  1  -8/+13
Invalidate TLB in batches, in order to reduce performance regressions. Currently, every caller performs a full barrier around a TLB invalidation, ignoring all other invalidations that may have already removed their PTEs from the cache. As this is a synchronous operation and can be quite slow, we cause multiple threads to contend on the TLB invalidate mutex blocking userspace. We only need to invalidate the TLB once after replacing our PTE to ensure that there is no possible continued access to the physical address before releasing our pages. By tracking a seqno for each full TLB invalidate we can quickly determine if one has been performed since rewriting the PTE, and only if necessary trigger one for ourselves. That helps to reduce the performance regression introduced by TLB invalidate logic. [mchehab: rebased to not require moving the code to a separate file] Cc: stable@vger.kernel.org Fixes: 7938d61591d3 ("drm/i915: Flush TLBs before releasing backing store") Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Chris Wilson <chris.p.wilson@intel.com> Cc: Fei Yang <fei.yang@intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org> Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/4e97ef5deb6739cadaaf40aa45620547e9c4ec06.1658924372.git.mchehab@kernel.org (cherry picked from commit 5d36acb7198b0e5eb88e6b701f9ad7b9448f8df9) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
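The seqno-gating idea described above can be illustrated with a small sketch. All structure and helper names below are hypothetical, not the actual i915 code; the point is only that an object skips its own flush when a full invalidation already happened after its PTEs were rewritten.

    #include <linux/mutex.h>
    #include <linux/types.h>

    struct tlb_domain {
        struct mutex lock;
        u32 seqno;                      /* bumped on every full invalidation */
    };

    struct tlb_obj {
        u32 dirty_seqno;                /* domain seqno when PTEs were last rewritten */
    };

    void full_tlb_invalidate(struct tlb_domain *d);     /* placeholder for the slow flush */

    static void obj_flush_tlb(struct tlb_domain *d, struct tlb_obj *obj)
    {
        mutex_lock(&d->lock);
        if (d->seqno == obj->dirty_seqno) {
            /* Nobody has flushed since we rewrote our PTEs: do it now. */
            full_tlb_invalidate(d);
            d->seqno++;
        }
        /* else: a later full invalidation already covered this object. */
        mutex_unlock(&d->lock);
    }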
2022-08-08  drm/i915/gt: Ignore TLB invalidations on idle engines  Chris Wilson  1  -4/+6
Check if the device is powered down prior to any engine activity, as, in such cases, all the TLBs were already invalidated, so an explicit TLB invalidation is not needed, thus reducing the performance regression impact. This becomes more significant with GuC, as TLB invalidation can only be done when the connection to the GuC is awake. Cc: stable@vger.kernel.org Fixes: 7938d61591d3 ("drm/i915: Flush TLBs before releasing backing store") Signed-off-by: Chris Wilson <chris.p.wilson@intel.com> Cc: Fei Yang <fei.yang@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/278a57a672edac75683f0818b292e95da583a5fe.1658924372.git.mchehab@kernel.org (cherry picked from commit 4bedceaed1ae1172cfe72d3ff752b3a1d32fe4d9) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
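A hedged sketch of the power-state check, using the generic runtime-PM helpers rather than the i915-specific wakeref API; the callback and context parameters are illustrative:

    #include <linux/pm_runtime.h>

    /*
     * Only flush if the device is currently powered: a suspended device has
     * lost its TLB contents anyway, so the slow explicit invalidation can be
     * skipped entirely.
     */
    static void flush_tlbs_if_awake(struct device *dev,
                                    void (*do_flush)(void *ctx), void *ctx)
    {
        if (pm_runtime_get_if_in_use(dev) <= 0)
            return;                 /* not powered: nothing to do */

        do_flush(ctx);
        pm_runtime_put(dev);
    }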
2022-02-28  drm/i915: add I915_BO_ALLOC_GPU_ONLY  Matthew Auld  1  -0/+3
If the user doesn't require CPU access for the buffer, then ALLOC_GPU_ONLY should be used, in order to prioritise allocating in the non-mappable portion of LMEM, on devices with small BAR. v2(Thomas): - The BO_ALLOC_TOPDOWN naming here is poor, since this is pure lies on systems that don't even have small BAR. A better name is GPU_ONLY, which is accurate regardless of the configuration. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Acked-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220225145502.331818-3-matthew.auld@intel.com
2022-02-23  Merge tag 'drm-intel-gt-next-2022-02-17' of git://anongit.freedesktop.org/drm/drm-intel into drm-intel-next  Rodrigo Vivi  1  -10/+0
UAPI Changes: - Weak parallel submission support for execlists Minimal implementation of the parallel submission support for execlists backend that was previously only implemented for GuC. Support one sibling non-virtual engine. Core Changes: - Two backmerges of drm/drm-next for header file renames/changes and i915_regs reorganization Driver Changes: - Add new DG2 subplatform: DG2-G12 (Matt R) - Add new DG2 workarounds (Matt R, Ram, Bruce) - Handle pre-programmed WOPCM registers for DG2+ (Daniele) - Update guc shim control programming on XeHP SDV+ (Daniele) - Add RPL-S C0/D0 stepping information (Anusha) - Improve GuC ADS initialization to work on ARM64 on dGFX (Lucas) - Fix KMD and GuC race on accessing PMU busyness (Umesh) - Use PM timestamp instead of RING TIMESTAMP for reference in PMU with GuC (Umesh) - Report error on invalid reset notification from GuC (John) - Avoid WARN splat by holding RPM wakelock during PXP unbind (Juston) - Fixes to parallel submission implementation (Matt B.) - Improve GuC loading status check/error reports (John) - Tweak TTM LRU priority hint selection (Matt A.) - Align the plane_vma to min_page_size of stolen mem (Ram) - Introduce vma resources and implement async unbinding (Thomas) - Use struct vma_resource instead of struct vma_snapshot (Thomas) - Return some TTM accel move errors instead of trying memcpy move (Thomas) - Fix a race between vma / object destruction and unbinding (Thomas) - Remove short-term pins from execbuf (Maarten) - Update to GuC version 69.0.3 (John, Michal Wa.) - Improvements to GT reset paths in GuC backend (Matt B.) - Use shrinker_release_pages instead of writeback in shmem object hooks (Matt A., Tvrtko) - Use trylock instead of blocking lock when freeing GEM objects (Maarten) - Allocate intel_engine_coredump_alloc with ALLOW_FAIL (Matt B.) - Fixes to object unmapping and purging (Matt A) - Check for wedged device in GuC backend (John) - Avoid lockdep splat by locking dpt_obj around set_cache_level (Maarten) - Allow dead vm to unbind vma's without lock (Maarten) - s/engine->i915/i915/ for DG2 engine workarounds (Matt R) - Use to_gt() helper for GGTT accesses (Michal Wi.) - Selftest improvements (Matt B., Thomas, Ram) - Coding style and compiler warning fixes (Matt B., Jasmine, Andi, Colin, Gustavo, Dan) From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/Yg4i2aCZvvee5Eai@jlahtine-mobl.ger.corp.intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> [Fixed conflicts while applying, using the fixups/drm-intel-gt-next.patch from drm-rerere's 1f2b1742abdd ("2022y-02m-23d-16h-07m-57s UTC: drm-tip rerere cache update")]
2022-02-14  drm/i915: don't include drm_cache.h in i915_drv.h  Jani Nikula  1  -0/+2
Include it only in files that use it. Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/14edab4a193ea3f73f387a88e3836c8555401871.1644507885.git.jani.nikula@intel.com
2022-02-03  Merge drm/drm-next into drm-intel-gt-next  Joonas Lahtinen  1  -0/+10
Backmerge to bring in 5.17-rc2 to introduce a common baseline to merge i915_regs changes from drm-intel-next. Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2022-01-25  drm/i915: Flush TLBs before releasing backing store  Tvrtko Ursulin  1  -0/+10
We need to flush TLBs before releasing backing store, otherwise userspace is able to encounter stale entries if a) it is not declaring access to certain buffers and b) it races with the backing store release from such an undeclared execution already executing on the GPU in parallel. The approach taken is to mark any buffer objects which were ever bound to the GPU and to trigger a serialized TLB flush when their backing store is released. Alternatively the flushing could be done on VMA unbind, at which point we would be able to ascertain whether there is potentially a parallel GPU execution (which could race), but essentially it boils down to paying the cost of TLB flushes potentially needlessly at VMA unbind time (when the backing store is not known to be going away so not needed for safety), versus potentially needlessly at backing store release time (since we at that point cannot tell whether there is anything executing on the GPU which uses that object). Therefore simplicity of implementation has been chosen for now with scope to benchmark and refine later as required. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reported-by: Sushma Venkatesh Reddy <sushma.venkatesh.reddy@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Acked-by: Dave Airlie <airlied@redhat.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
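A minimal sketch of the scheme described above, with hypothetical flag and helper names: mark an object the first time it is bound for the GPU, then flush before its pages go back to the system.

    #include <linux/bits.h>

    #define OBJ_WAS_GPU_BOUND   BIT(0)          /* ever had GPU-visible PTEs */

    struct gpu_object {
        unsigned long flags;
    };

    void serialized_tlb_flush(void);            /* placeholder for the real flush */

    static void obj_mark_gpu_bound(struct gpu_object *obj)
    {
        obj->flags |= OBJ_WAS_GPU_BOUND;        /* set when first bound for the GPU */
    }

    static void obj_release_backing_store(struct gpu_object *obj)
    {
        /*
         * Stale translations must be gone before the pages can be reused
         * elsewhere, but only objects that were ever GPU-visible pay the
         * cost of the serialized flush.
         */
        if (obj->flags & OBJ_WAS_GPU_BOUND)
            serialized_tlb_flush();
        /* ... then actually release the pages ... */
    }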
2022-01-10  drm/i915: don't call free_mmap_offset when purging  Matthew Auld  1  -1/+0
The TTM backend is in theory the only user here(also purge should only be called once we have dropped the pages), where it is setup at object creation and is only removed once the object is destroyed. Also resetting the node here might be iffy since the ttm fault handler uses the stored fake offset to determine the page offset within the pages array. This also blows up in the dontneed-before-mmap test, since the expectation is that the vma_node will live on, until the object is destroyed: <2> [749.062902] kernel BUG at drivers/gpu/drm/i915/gem/i915_gem_ttm.c:943! <4> [749.062923] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI <4> [749.062928] CPU: 0 PID: 1643 Comm: gem_madvise Tainted: G U W 5.16.0-rc8-CI-CI_DRM_11046+ #1 <4> [749.062933] Hardware name: Gigabyte Technology Co., Ltd. GB-Z390 Garuda/GB-Z390 Garuda-CF, BIOS IG1c 11/19/2019 <4> [749.062937] RIP: 0010:i915_ttm_mmap_offset.cold.35+0x5b/0x5d [i915] <4> [749.063044] Code: 00 48 c7 c2 a0 23 4e a0 48 c7 c7 26 df 4a a0 e8 95 1d d0 e0 bf 01 00 00 00 e8 8b ec cf e0 31 f6 bf 09 00 00 00 e8 5f 30 c0 e0 <0f> 0b 48 c7 c1 24 4b 56 a0 ba 5b 03 00 00 48 c7 c6 c0 23 4e a0 48 <4> [749.063052] RSP: 0018:ffffc90002ab7d38 EFLAGS: 00010246 <4> [749.063056] RAX: 0000000000000240 RBX: ffff88811f2e61c0 RCX: 0000000000000006 <4> [749.063060] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000009 <4> [749.063063] RBP: ffffc90002ab7e58 R08: 0000000000000001 R09: 0000000000000001 <4> [749.063067] R10: 000000000123d0f8 R11: ffffc90002ab7b20 R12: ffff888112a1a000 <4> [749.063071] R13: 0000000000000004 R14: ffff88811f2e61c0 R15: ffff888112a1a000 <4> [749.063074] FS: 00007f6e5fcad500(0000) GS:ffff8884ad600000(0000) knlGS:0000000000000000 <4> [749.063078] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 <4> [749.063081] CR2: 00007efd264e39f0 CR3: 0000000115fd6005 CR4: 00000000003706f0 <4> [749.063085] Call Trace: <4> [749.063087] <TASK> <4> [749.063089] __assign_mmap_offset+0x41/0x300 [i915] <4> [749.063171] __assign_mmap_offset_handle+0x159/0x270 [i915] <4> [749.063248] ? i915_gem_dumb_mmap_offset+0x70/0x70 [i915] <4> [749.063325] drm_ioctl_kernel+0xae/0x140 <4> [749.063330] drm_ioctl+0x201/0x3d0 <4> [749.063333] ? i915_gem_dumb_mmap_offset+0x70/0x70 [i915] <4> [749.063409] ? do_user_addr_fault+0x200/0x670 <4> [749.063415] __x64_sys_ioctl+0x6d/0xa0 <4> [749.063419] do_syscall_64+0x3a/0xb0 <4> [749.063423] entry_SYSCALL_64_after_hwframe+0x44/0xae <4> [749.063428] RIP: 0033:0x7f6e5f100317 Testcase: igt/gem_madvise/dontneed-before-mmap Fixes: cf3e3e86d779 ("drm/i915: Use ttm mmap handling for ttm bo's.") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220106174910.280616-1-matthew.auld@intel.com (cherry picked from commit 658a0c632625e1db51837ff754fe18a6a7f2ccf8) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2022-01-10  drm/i915: don't call free_mmap_offset when purging  Matthew Auld  1  -1/+0
The TTM backend is in theory the only user here(also purge should only be called once we have dropped the pages), where it is setup at object creation and is only removed once the object is destroyed. Also resetting the node here might be iffy since the ttm fault handler uses the stored fake offset to determine the page offset within the pages array. This also blows up in the dontneed-before-mmap test, since the expectation is that the vma_node will live on, until the object is destroyed: <2> [749.062902] kernel BUG at drivers/gpu/drm/i915/gem/i915_gem_ttm.c:943! <4> [749.062923] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI <4> [749.062928] CPU: 0 PID: 1643 Comm: gem_madvise Tainted: G U W 5.16.0-rc8-CI-CI_DRM_11046+ #1 <4> [749.062933] Hardware name: Gigabyte Technology Co., Ltd. GB-Z390 Garuda/GB-Z390 Garuda-CF, BIOS IG1c 11/19/2019 <4> [749.062937] RIP: 0010:i915_ttm_mmap_offset.cold.35+0x5b/0x5d [i915] <4> [749.063044] Code: 00 48 c7 c2 a0 23 4e a0 48 c7 c7 26 df 4a a0 e8 95 1d d0 e0 bf 01 00 00 00 e8 8b ec cf e0 31 f6 bf 09 00 00 00 e8 5f 30 c0 e0 <0f> 0b 48 c7 c1 24 4b 56 a0 ba 5b 03 00 00 48 c7 c6 c0 23 4e a0 48 <4> [749.063052] RSP: 0018:ffffc90002ab7d38 EFLAGS: 00010246 <4> [749.063056] RAX: 0000000000000240 RBX: ffff88811f2e61c0 RCX: 0000000000000006 <4> [749.063060] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000009 <4> [749.063063] RBP: ffffc90002ab7e58 R08: 0000000000000001 R09: 0000000000000001 <4> [749.063067] R10: 000000000123d0f8 R11: ffffc90002ab7b20 R12: ffff888112a1a000 <4> [749.063071] R13: 0000000000000004 R14: ffff88811f2e61c0 R15: ffff888112a1a000 <4> [749.063074] FS: 00007f6e5fcad500(0000) GS:ffff8884ad600000(0000) knlGS:0000000000000000 <4> [749.063078] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 <4> [749.063081] CR2: 00007efd264e39f0 CR3: 0000000115fd6005 CR4: 00000000003706f0 <4> [749.063085] Call Trace: <4> [749.063087] <TASK> <4> [749.063089] __assign_mmap_offset+0x41/0x300 [i915] <4> [749.063171] __assign_mmap_offset_handle+0x159/0x270 [i915] <4> [749.063248] ? i915_gem_dumb_mmap_offset+0x70/0x70 [i915] <4> [749.063325] drm_ioctl_kernel+0xae/0x140 <4> [749.063330] drm_ioctl+0x201/0x3d0 <4> [749.063333] ? i915_gem_dumb_mmap_offset+0x70/0x70 [i915] <4> [749.063409] ? do_user_addr_fault+0x200/0x670 <4> [749.063415] __x64_sys_ioctl+0x6d/0xa0 <4> [749.063419] do_syscall_64+0x3a/0xb0 <4> [749.063423] entry_SYSCALL_64_after_hwframe+0x44/0xae <4> [749.063428] RIP: 0033:0x7f6e5f100317 Testcase: igt/gem_madvise/dontneed-before-mmap Fixes: cf3e3e86d779 ("drm/i915: Use ttm mmap handling for ttm bo's.") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220106174910.280616-1-matthew.auld@intel.com
2022-01-10  drm/i915: remove writeback hook  Matthew Auld  1  -10/+0
Ditch the writeback hook and drop i915_gem_object_writeback(). We already support the shrinker_release_pages hook which can just call shmem_writeback directly. Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211215110746.865-1-matthew.auld@intel.com
2021-12-14  drm/i915: replace X86_FEATURE_PAT with pat_enabled()  Lucas De Marchi  1  -2/+1
PAT can be disabled on boot with "nopat" in the command line. Replace one x86-ism with another, which is slightly more correct to prepare for supporting other architectures. Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211202003048.1015511-1-lucas.demarchi@intel.com
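The change itself is a one-line substitution; a hedged sketch of the check (the wrapping helper name is illustrative, and the header location varies by kernel version):

    #include <asm/memtype.h>    /* pat_enabled(); <asm/pat.h> on older kernels */

    static bool wc_mappings_usable(void)
    {
        /* Before: boot_cpu_has(X86_FEATURE_PAT), the raw CPU feature bit. */
        /* After: pat_enabled() also honours "nopat" on the command line. */
        return pat_enabled();
    }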
2021-11-25  drm/i915: Add support for moving fence waiting  Maarten Lankhorst  1  -0/+6
For now, we will only allow async migration when TTM is used, so the paths we care about are related to TTM. The mmap path is handled by having the fence in ttm_bo->moving, when pinning, the binding only becomes available after the moving fence is signaled, and pinning a cpu map will only work after the moving fence signals. This should close all holes where userspace can read a buffer before it's fully migrated. v2: - Fix a couple of SPARSE warnings v3: - Fix a NULL pointer dereference v4: - Ditch the moving fence waiting for i915_vma_pin_iomap() and replace with a verification that the vma is already bound. (Matthew Auld) - Squash with a previous patch introducing moving fence waiting and accessing interfaces (Matthew Auld) - Rename to indicated that we also add support for sync waiting. v5: - Fix check for NULL and unreferencing i915_vma_verify_bind_complete() (Matthew Auld) - Fix compilation failure if !CONFIG_DRM_I915_DEBUG_GEM - Fix include ordering. (Matthew Auld) v7: - Fix yet another compilation failure with clang if !CONFIG_DRM_I915_DEBUG_GEM Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-2-thomas.hellstrom@linux.intel.com
2021-11-02  drm/i915: stop setting cache_dirty on discrete  Matthew Auld  1  -0/+1
Should not be needed. Even with non-coherent display, we should be using device local-memory there, and not system memory. v2: also add a warning in i915_gem_clflush_object Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #v1 Link: https://patchwork.freedesktop.org/patch/msgid/20211027161813.3094681-4-matthew.auld@intel.com
2021-10-22  drm/i915/ttm: move shrinker management into adjust_lru  Matthew Auld  1  -2/+3
We currently just evict lmem objects to system memory when under memory pressure. For this case we might lack the usual object mm.pages, which effectively hides the pages from the i915-gem shrinker, until we actually "attach" the TT to the object, or in the case of lmem-only objects it just gets migrated back to lmem when touched again. For all cases we can just adjust the i915 shrinker LRU each time we also adjust the TTM LRU. The two cases we care about are: 1) When something is moved by TTM, including when initially populating an object. Importantly this covers the case where TTM moves something from lmem <-> smem, outside of the normal get_pages() interface, which should still ensure the shmem pages underneath are reclaimable. 2) When calling into i915_gem_object_unlock(). The unlock should ensure the object is removed from the shrinker LRU, if it was indeed swapped out, or just purged, when the shrinker drops the object lock. v2(Thomas): - Handle managing the shrinker LRU in adjust_lru, where it is always safe to touch the object. v3(Thomas): - Pretty much a re-write. This time piggy back off the shrink_pin stuff, which actually seems to fit quite well for what we want here. v4(Thomas): - Just use a simple boolean for tracking ttm_shrinkable. v5: - Ensure we call adjust_lru when faulting the object, to ensure the pages are visible to the shrinker, if needed. - Add back the adjust_lru when in i915_ttm_move (Thomas) v6(Reported-by: kernel test robot <lkp@intel.com>): - Remove unused i915_tt Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> #v4 Link: https://patchwork.freedesktop.org/patch/msgid/20211018091055.1998191-6-matthew.auld@intel.com
2021-10-22  drm/i915/ttm: add tt shmem backend  Matthew Auld  1  -2/+4
For cached objects we can allocate our pages directly in shmem. This should make it possible(in a later patch) to utilise the existing i915-gem shrinker code for such objects. For now this is still disabled. v2(Thomas): - Add optional try_to_writeback hook for objects. Importantly we need to check if the object is even still shrinkable; in between us dropping the shrinker LRU lock and acquiring the object lock it could for example have been moved. Also we need to differentiate between "lazy" shrinking and the immediate writeback mode. Also later we need to handle objects which don't even have mm.pages, so bundling this into put_pages() would require somehow handling that edge case, hence just letting the ttm backend handle everything in try_to_writeback doesn't seem too bad. v3(Thomas): - Likely a bad idea to touch the object from the unpopulate hook, since it's not possible to hold a reference, without also creating circular dependency, so likely this is too fragile. For now just ensure we at least mark the pages as dirty/accessed when called from the shrinker on WILLNEED objects. - s/try_to_writeback/shrinker_release_pages, since this can do more than just writeback. - Get rid of do_backup boolean and just set the SWAPPED flag prior to calling unpopulate. - Keep shmem_tt as lowest priority for the TTM LRU bo_swapout walk, since these just get skipped anyway. We can try to come up with something better later. v4(Thomas): - s/PCI_DMA/DMA/. Also drop NO_KERNEL_MAPPING and NO_WARN, which apparently doesn't do anything with streaming mappings. - Just pass along the error for ->truncate, and assume nothing. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Christian König <christian.koenig@amd.com> Cc: Oak Zeng <oak.zeng@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Acked-by: Oak Zeng <oak.zeng@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211018091055.1998191-2-matthew.auld@intel.com
2021-07-16  drm/i915: Remove allow_alloc from i915_gem_object_get_sg*  Jason Ekstrand  1  -16/+4
This reverts the rest of 0edbb9ba1bfe ("drm/i915: Move cmd parser pinning to execbuffer"). Now that the only user of i915_gem_object_get_sg without allow_alloc has been removed, we can drop the parameter. This portion of the revert was broken into its own patch to aid review. Signed-off-by: Jason Ekstrand <jason@jlekstrand.net> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210714193419.1459723-4-jason@jlekstrand.net
2021-07-09  drm/i915: use consistent CPU mappings for pin_map users  Matthew Auld  1  -2/+29
For discrete, users of pin_map() need to obey the same rules as the TTM backend, where we map system-only objects as WB, and everything else as WC. The simplest approach for now is to just force the correct mapping type as per the new rules for discrete. Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210705135310.1502437-1-matthew.auld@intel.com
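The rule itself is small enough to state in code; a sketch with hypothetical enum and helper names:

    enum map_type { MAP_WB, MAP_WC };

    /* On discrete, system-memory objects are mapped WB, everything else WC. */
    static enum map_type discrete_map_type(bool in_system_memory)
    {
        return in_system_memory ? MAP_WB : MAP_WC;
    }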
2021-06-24  drm/i915: Update object placement flags to be mutable  Thomas Hellström  1  -1/+1
The object ops flag I915_GEM_OBJECT_HAS_IOMEM and the object flag I915_BO_ALLOC_STRUCT_PAGE are considered immutable by much of our code. Introduce a new mem_flags member to hold these and make sure checks for these flags being set are either done under the object lock or with pages properly pinned. The flags will change during migration under the object lock. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210624084240.270219-2-thomas.hellstrom@linux.intel.com
2021-06-11  drm/i915: Use ttm mmap handling for ttm bo's.  Maarten Lankhorst  1  -2/+1
Use the ttm handlers for servicing page faults, and vm_access. We do our own validation of read-only access, otherwise use the ttm handlers as much as possible. Because the ttm handlers expect the vma_node at vma->base, we need to slightly massage the mmap handlers to look at vma_node->driver_private to fetch the bo; if it's NULL, we assume i915's normal mmap_offset uapi is used. This is the easiest way to achieve compatibility without changing ttm's semantics. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-5-thomas.hellstrom@linux.intel.com
2021-06-02  drm/i915/ttm: Initialize the ttm device and memory managers  Thomas Hellström  1  -1/+2
Temporarily remove the buddy allocator and related selftests and hook up the TTM range manager for i915 regions. Also modify the mock region selftests somewhat to account for a fragmenting manager. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210602083818.241793-2-thomas.hellstrom@linux.intel.com
2021-05-17  drm/i915/gem: Pin the L-shape quirked object as unshrinkable  Chris Wilson  1  -0/+2
When instantiating a tiled object on an L-shaped memory machine, we mark the object as unshrinkable to prevent the shrinker from trying to swap out the pages. We have to do this as we do not know the swizzling on the individual pages, and so the data will be scrambled across swap out/in. Not only do we need to move the object off the shrinker list, we need to mark the object with shrink_pin so that the counter is consistent across calls to madvise. v2: in the madvise ioctl we need to check if the object is currently shrinkable/purgeable, not if the object type supports shrinking Fixes: 0175969e489a ("drm/i915/gem: Use shrinkable status for unknown swizzle quirks") References: https://gitlab.freedesktop.org/drm/intel/-/issues/3293 References: https://gitlab.freedesktop.org/drm/intel/-/issues/3450 Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: <stable@vger.kernel.org> # v5.12+ Link: https://patchwork.freedesktop.org/patch/msgid/20210517084640.18862-1-matthew.auld@intel.com
2021-03-24  drm/i915: Finally remove obj->mm.lock.  Maarten Lankhorst  1  -34/+9
With all callers and selftests fixed to use ww locking, we can now finally remove this lock. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-62-maarten.lankhorst@linux.intel.com
2021-03-24  drm/i915: Lock ww in ucode objects correctly  Maarten Lankhorst  1  -0/+20
In the ucode functions, the calls are done before userspace runs, when debugging using debugfs, or when creating semi-permanent mappings; we can safely use the unlocked versions that do the ww dance for us. Because there is no pin_pages_unlocked yet, add it as a convenience function. This removes possible lockdep splats about missing resv lock for ucode. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-37-maarten.lankhorst@linux.intel.com
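The convenience helper essentially wraps the pin in the usual ww retry loop. Roughly, using the driver's ww helpers (a sketch; exact details such as headers may differ):

    /* Lives inside the i915 driver, next to the other pin_pages helpers. */
    int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
    {
        struct i915_gem_ww_ctx ww;
        int err;

        /* Take the object lock via a ww context, retrying on deadlock. */
        i915_gem_ww_ctx_init(&ww, true);
    retry:
        err = i915_gem_object_lock(obj, &ww);
        if (!err)
            err = i915_gem_object_pin_pages(obj);
        if (err == -EDEADLK) {
            err = i915_gem_ww_ctx_backoff(&ww);
            if (!err)
                goto retry;
        }
        i915_gem_ww_ctx_fini(&ww);
        return err;
    }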
2021-03-24  drm/i915: Prepare for obj->mm.lock removal, v2.  Thomas Hellström  1  -2/+12
Stolen objects need to take the object lock, and we may call put_pages when the refcount drops to 0; ensure all calls are handled correctly. Changes since v1: - Rebase on top of upstream changes. Idea-from: Thomas Hellström <thomas.hellstrom@intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-33-maarten.lankhorst@linux.intel.com
2021-03-24  drm/i915: Fix workarounds selftest, part 1  Maarten Lankhorst  1  -0/+12
pin_map needs the ww lock, so ensure we pin both before submission. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> [danvet: Again pick older version just to side-step conflicts.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210128162612.927917-32-maarten.lankhorst@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-32-maarten.lankhorst@linux.intel.com
2021-03-24  drm/i915: Flatten obj->mm.lock  Maarten Lankhorst  1  -17/+17
With userptr fixed, there is no need for all separate lockdep classes now, and we can remove all lockdep tricks used. A trylock in the shrinker is all we need now to flatten the locking hierarchy. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> [danvet: Resolve conflict because we don't have the patch from Chris to rebrand i915_gem_shrinker_taints_mutex to fs_reclaim_taints_mutex. It's not a bad idea, but if we do it, it should be moved to the right header. See https://lore.kernel.org/intel-gfx/20210202154318.19246-1-chris@chris-wilson.co.uk/] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-18-maarten.lankhorst@linux.intel.com
2021-03-24  drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v7.  Maarten Lankhorst  1  -1/+1
Instead of doing what we do currently, which will never work with PROVE_LOCKING, do the same as AMD does, and something similar to relocation slowpath. When all locks are dropped, we acquire the pages for pinning. When the locks are taken, we transfer those pages in .get_pages() to the bo. As a final check before installing the fences, we ensure that the mmu notifier was not called; if it is, we return -EAGAIN to userspace to signal it has to start over. Changes since v1: - Unbinding is done in submit_init only. submit_begin() removed. - MMU_NOTFIER -> MMU_NOTIFIER Changes since v2: - Make i915->mm.notifier a spinlock. Changes since v3: - Add WARN_ON if there are any page references left, should have been 0. - Return 0 on success in submit_init(), bug from spinlock conversion. - Release pvec outside of notifier_lock (Thomas). Changes since v4: - Mention why we're clearing eb->[i + 1].vma in the code. (Thomas) - Actually check all invalidations in eb_move_to_gpu. (Thomas) - Do not wait when process is exiting to fix gem_ctx_persistence.userptr. Changes since v5: - Clarify why check on PF_EXITING is (temporarily) required. Changes since v6: - Ensure userptr validity is checked in set_domain through a special path. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Acked-by: Dave Airlie <airlied@redhat.com> [danvet: s/kfree/kvfree/ in i915_gem_object_userptr_drop_ref in the previous review round, but which got lost. The other open questions around page refcount are imo better discussed in a separate series, with amdgpu folks involved]. Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-17-maarten.lankhorst@linux.intel.com
2021-03-24  drm/i915: Move HAS_STRUCT_PAGE to obj->flags  Maarten Lankhorst  1  -3/+2
We want to remove the changing of ops structure for attaching phys pages, so we need to kill off HAS_STRUCT_PAGE from ops->flags, and put it in the bo. This will remove a potential race of dereferencing the wrong obj->ops without ww mutex held. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> [danvet: apply with wiggle] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-8-maarten.lankhorst@linux.intel.com
2021-03-24  drm/i915: Move cmd parser pinning to execbuffer  Maarten Lankhorst  1  -4/+17
We need to get rid of allocations in the cmd parser, because it needs to be called from a signaling context, first move all pinning to execbuf, where we already hold all locks. Allocate jump_whitelist in the execbuffer, and add annotations around intel_engine_cmd_parser(), to ensure we only call the command parser without allocating any memory, or taking any locks we're not supposed to. Because i915_gem_object_get_page() may also allocate memory, add a path to i915_gem_object_get_sg() that prevents memory allocations, and walk the sg list manually. It should be similarly fast. This has the added benefit of being able to catch all memory allocation errors before the point of no return, and return -ENOMEM safely to the execbuf submitter. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-4-maarten.lankhorst@linux.intel.com
2021-01-20  drm/i915/gem: Use shrinkable status for unknown swizzle quirks  Chris Wilson  1  -8/+11
Give obj->mm.quirked a name much more reflective of its purpose (i915_gem_object_has_tiling_quirk) and move it from the obj->mm field as it doesn't denote a quirk of the backing store, but a quirk in the object in its treatment of the backing pages, similar to tiling modes. Then instead of abusing the pinned status of the buffer to protect it from the shrinker, we can instead hide the buffer from the shrinker so it is never considered for being swapped. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210119214336.1463-4-chris@chris-wilson.co.uk
2020-12-02  drm/i915/gem: Report error for vmap() failure  Chris Wilson  1  -2/+4
Convert the NULL pointer from a failed vmap() to ERR_PTR(-ENOMEM) for propagation. <1> [269.830447] BUG: kernel NULL pointer dereference, address: 0000000000000000 <1> [269.830455] #PF: supervisor write access in kernel mode <1> [269.830457] #PF: error_code(0x0002) - not-present page <6> [269.830459] PGD 0 P4D 0 <4> [269.830465] Oops: 0002 [#1] PREEMPT SMP PTI <4> [269.830469] CPU: 3 PID: 5789 Comm: i915_selftest Tainted: G U 5.10.0-rc6-CI-CI_DRM_9412+ #1 <4> [269.830472] Hardware name: Intel Corp. Geminilake/GLK RVP2 LP4SD (07), BIOS GELKRVPA.X64.0062.B30.1708222146 08/22/2017 <4> [269.830636] RIP: 0010:igt_client_fill+0x1b9/0x5f0 [i915] <4> [269.830640] Code: e8 0c 32 02 00 48 89 c5 48 3d 00 f0 ff ff 0f 87 e9 02 00 00 48 8b 8b 78 06 00 00 44 89 f0 48 89 ef 35 af be ad de 48 c1 e9 02 <f3> ab 0f b6 83 80 03 00 00 89 c2 c0 ea 03 83 e2 02 75 09 83 c8 20 <4> [269.830642] RSP: 0018:ffffc900007a79e8 EFLAGS: 00010206 <4> [269.830645] RAX: 00000000df0bf37b RBX: ffff88811d8af3c0 RCX: 00000000010afc00 <4> [269.830647] RDX: 0000000000000000 RSI: ffffffff822f2b17 RDI: 0000000000000000 <4> [269.830648] RBP: 0000000000000000 R08: ffff888111c80930 R09: 00000000fffffffe <4> [269.830650] R10: 0000000000000000 R11: 00000000ffbc70e4 R12: ffff88811090f700 <4> [269.830652] R13: ffff88810df60180 R14: 0000000001a64dd4 R15: 0000000000000000 <4> [269.830655] FS: 00007f137b07de40(0000) GS:ffff88817b980000(0000) knlGS:0000000000000000 <4> [269.830657] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 <4> [269.830659] CR2: 0000000000000000 CR3: 0000000115984000 CR4: 0000000000350ee0 <4> [269.830661] Call Trace: <4> [269.830780] __i915_subtests.cold.7+0x42/0x92 [i915] <4> [269.830886] ? __i915_nop_teardown+0x10/0x10 [i915] <4> [269.830989] ? __i915_live_setup+0x30/0x30 [i915] <4> [269.831104] __run_selftests.part.3+0xf7/0x14c [i915] <4> [269.831939] i915_live_selftests.cold.5+0x1f/0x47 [i915] <4> [269.832027] i915_pci_probe+0x93/0x1d0 [i915] <4> [269.832037] ? _raw_spin_unlock_irqrestore+0x2f/0x50 <4> [269.832043] pci_device_probe+0x9e/0x110 <4> [269.832049] really_probe+0x1c4/0x430 <4> [269.832053] driver_probe_device+0xd9/0x140 <4> [269.832056] device_driver_attach+0x4a/0x50 <4> [269.832059] __driver_attach+0x83/0x140 <4> [269.832062] ? device_driver_attach+0x50/0x50 <4> [269.832064] ? device_driver_attach+0x50/0x50 <4> [269.832067] bus_for_each_dev+0x75/0xc0 <4> [269.832070] bus_add_driver+0x14b/0x1f0 <4> [269.832073] driver_register+0x66/0xb0 <4> [269.832160] i915_init+0x70/0x87 [i915] <4> [269.832164] ? 0xffffffffa05e3000 <4> [269.832168] do_one_initcall+0x56/0x2e0 <4> [269.832174] ? kmem_cache_alloc_trace+0x6a4/0x770 <4> [269.832180] do_init_module+0x55/0x200 <4> [269.832184] load_module+0x22a2/0x2480 <4> [269.832191] ? 
__do_sys_finit_module+0xad/0x110 <4> [269.832194] __do_sys_finit_module+0xad/0x110 <4> [269.832199] do_syscall_64+0x33/0x80 <4> [269.832202] entry_SYSCALL_64_after_hwframe+0x44/0xa9 <4> [269.832204] RIP: 0033:0x7f137a718839 <4> [269.832208] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48 <4> [269.832210] RSP: 002b:00007ffc4267d308 EFLAGS: 00000246 ORIG_RAX: 0000000000000139 <4> [269.832214] RAX: ffffffffffffffda RBX: 000056288b88f0d0 RCX: 00007f137a718839 <4> [269.832216] RDX: 0000000000000000 RSI: 000056288b895850 RDI: 0000000000000007 <4> [269.832218] RBP: 000056288b895850 R08: 312d3d7374736574 R09: 000056288b88c020 <4> [269.832220] R10: 00007ffc4267d450 R11: 0000000000000246 R12: 0000000000000000 <4> [269.832222] R13: 000056288b8877a0 R14: 0000000000000020 R15: 0000000000000045 <4> [269.832226] Modules linked in: i915(+) vgem mei_hdcp snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel cdc_ether usbnet snd_intel_dspcfg mii snd_hda_codec snd_hwdep snd_hda_core r8169 snd_pcm realtek mei_me mei prime_numbers intel_lpss_pci i2c_hid pinctrl_geminilake [last unloaded: i915] <4> [269.832264] CR2: 0000000000000000 Fixes: cb2ce93e5b05 ("drm/i915/gem: Differentiate oom failures from invalid map types") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201201215441.31900-1-chris@chris-wilson.co.uk
2020-11-30  drm/i915/gem: Differentiate oom failures from invalid map types  Chris Wilson  1  -14/+12
After a cursory check on the parameters to i915_gem_object_pin_map(), where we return a precise error, if the backend rejects the mapping we always return ERR_PTR(-ENOMEM). Let us also return a more precise error here so we can differentiate between running out of memory and programming errors (or situations where we may be trying different paths and looking for an error from an unsupported map). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201127195334.13134-1-chris@chris-wilson.co.uk
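A sketch of the distinction, with a hypothetical backend hook name: an unsupported map type reports a programming error, while a backend allocation failure reports -ENOMEM.

    #include <linux/err.h>

    void *backend_map(void *obj, int type);     /* placeholder for the real mapper */

    static void *pin_map(void *obj, int type, bool type_supported)
    {
        void *ptr;

        if (!type_supported)
            return ERR_PTR(-EINVAL);    /* programming error: wrong map type */

        ptr = backend_map(obj, type);
        if (!ptr)
            return ERR_PTR(-ENOMEM);    /* genuinely out of memory */

        return ptr;
    }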
2020-11-13  Merge tag 'drm-intel-gt-next-2020-11-12-1' of git://anongit.freedesktop.org/drm/drm-intel into drm-next  Dave Airlie  1  -8/+13
Cross-subsystem Changes: - DMA mapped scatterlist fixes in i915 to unblock merging of https://lkml.org/lkml/2020/9/27/70 (Tvrtko, Tom) Driver Changes: - Fix for user reported issue #2381 (Graphical output stops with "switching to inteldrmfb from simple"): Mark ininitial fb obj as WT on eLLC machines to avoid rcu lockup during fbdev init (Ville, Chris) - Fix for Tigerlake (and earlier) to avoid spurious empty CSB events leading to hang (Chris, Bruce) - Delay execlist processing for Tigerlake to avoid hang (Chris) - Fix for Tigerlake RCS engine health check through heartbeat (Chris) - Fix for Tigerlake reserved MOCS entries (Ayaz, Chris) - Fix Media power gate sequence on Tigerlake (Rodrigo) - Enable eLLC caching of display buffers for SKL+ (Ville) - Support parsing of oversize batches on Gen9 (Matt, Chris) - Exclude low pages (128KiB) of stolen from use to avoid thrashing during reset (Chris) - Flush engines before Tigerlake breadcrumbs (Chris) - Use the local HWSP offset during submission (Chris) - Flush coherency domains on first set-domain-ioctl (Chris, Zbigniew) - Use the active reference on the vma while capturing to avoid use-after-free (Chris) - Fix MOCS PTE setting for gen9+ (Ville) - Avoid NULL dereference on IPS driver callback while unbinding i915 (Chris) - Avoid NULL dereference from PT/PD stash allocation error (Matt) - Hold request reference for canceling an active context (Chris) - Avoid infinite loop on x86-32 when mapping a lot of objects (Chris) - Disallow WC mappings when processor doesn't support them (Chris) - Return correct error in i915_gem_object_copy_blt() error path (Dan) - Return correct error in intel_context_create_request() error path (Maarten) - Tune down GuC communication enabled/disabled messages to debug (Jani) - Fix rebased commit "Remove i915_request.lock requirement for execution callbacks" (Chris) - Cancel outstanding work after disabling heartbeats on an engine (Chris) - Signal cancelled requests (Chris) - Retire cancelled requests on unload (Chris) - Scrub HW state on driver remove (Chris) - Undo forced context restores after trivial preemptions (Chris) - Handle PCI unbind in PMU code (Tvrtko) - Fix CPU hotplug with multiple GPUs in PMU code (Trtkko) - Correctly set SFC capability for video engines (Venkata) - Update GuC code to use firmware v49.0.1 (John, Matthew B., Daniele, Oscar, Michel, Rodrigo, Michal) - Improve GuC warnings on loading failure (John) - Avoid ownership race in buffer pool by clearing age (Chris) - Use MMIO to read CSB in case of failure (Chris, Mika) - Show engine properties in engine state dump to indicate changes (Chris, Joonas) - Break up error capture compression loops with cond_resched() (Chris) - Reduce GPU error capture mutex hold time to avoid khungtaskd (Chris) - Serialise debugfs i915_gem_objects with ctx->mutex (Chris) - Always test execution status on closing the context and close if not persistent (Chris) - Avoid mixing integer types during batch copies (Chris, Jared) - Skip over MI_NOOP when parsing to avoid overhead (Chris) - Hold onto an explicit ref to i915_vma_work.pinned (Chris) - Perform all asynchronous waits prior to marking payload start (Chris) - Pull phys pread/pwrite implementations to the backend (Matt) - Improve record of hung engines in error state (Tvrtko) - Allow backends to override pread implementation (Matt) - Reinforce LRC poisoning checks to confirm context survives execution (Chris) - Fix memory region max size calculation (Matt) - Fix order when adding blocks to memory region (Matt) - 
Eliminate unused intel_virtual_engine_get_sibling func (Chris) - Cleanup kasan warning for on-stack (unsigned long) casting (Chris) - Onion unwind for scratch page allocation failure (Chris) - Poison stolen pages before use (Chris) - Selftest improvements (Chris) Signed-off-by: Dave Airlie <airlied@redhat.com> From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201112163407.GA20320@jlahtine-mobl.ger.corp.intel.com
2020-10-18  drm/i915: use vmap in i915_gem_object_map  Christoph Hellwig  1  -68/+59
i915_gem_object_map implements fairly low-level vmap functionality in a driver. Split it into two helpers, one for remapping kernel memory which can use vmap, and one for I/O memory that uses vmap_pfn. The only practical difference is that alloc_vm_area prefaults the vmalloc area PTEs, which doesn't seem to be required here for the kernel memory case (and could be added to vmap using a flag if actually required). Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Juergen Gross <jgross@suse.com> Cc: Matthew Auld <matthew.auld@intel.com> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <ngupta@vflare.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Stefano Stabellini <sstabellini@kernel.org> Cc: Uladzislau Rezki (Sony) <urezki@gmail.com> Link: https://lkml.kernel.org/r/20201002122204.1534411-9-hch@lst.de Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
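The two resulting helpers boil down to the two vmalloc-layer primitives; a condensed sketch with argument handling simplified (the wrapper names are illustrative):

    #include <linux/vmalloc.h>
    #include <linux/mm.h>

    /* Kernel (struct page backed) memory: plain vmap is enough. */
    static void *map_pages(struct page **pages, unsigned int n, pgprot_t prot)
    {
        return vmap(pages, n, 0, prot);
    }

    /* I/O memory has no struct page, so map it by PFN instead. */
    static void *map_io_pfns(unsigned long *pfns, unsigned int n, pgprot_t prot)
    {
        return vmap_pfn(pfns, n, prot);
    }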
2020-10-18  drm/i915: stop using kmap in i915_gem_object_map  Christoph Hellwig  1  -5/+2
kmap for !PageHighmem is just a convoluted way to say page_address, and kunmap is a no-op in that case. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Juergen Gross <jgross@suse.com> Cc: Matthew Auld <matthew.auld@intel.com> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <ngupta@vflare.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Stefano Stabellini <sstabellini@kernel.org> Cc: Uladzislau Rezki (Sony) <urezki@gmail.com> Link: https://lkml.kernel.org/r/20201002122204.1534411-8-hch@lst.de Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
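The simplification relies on a basic highmem fact; a small illustration (the helper name is made up):

    #include <linux/highmem.h>
    #include <linux/mm.h>

    /*
     * Lowmem pages are permanently part of the kernel's linear mapping, so
     * kmap() would just return page_address() and kunmap() would be a no-op;
     * only genuine highmem pages need the kmap machinery.
     */
    static void *map_one_page(struct page *page)
    {
        if (PageHighMem(page))
            return kmap(page);
        return page_address(page);
    }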
2020-10-06  drm/i915: Fix DMA mapped scatterlist lookup  Tvrtko Ursulin  1  -8/+13
As the previous patch fixed the places where we walk the whole scatterlist for DMA addresses, this patch fixes the random lookup functionality. To achieve this we have to add a second lookup iterator and add an i915_gem_object_get_sg_dma helper, to be used analogously to the existing i915_gem_object_get_sg. Therefore two lookup caches are maintained per object and they are flushed at the same point for simplicity. (Strictly speaking the DMA cache should be flushed from i915_gem_gtt_finish_pages, but today this coincides with unsetting of the pages in general.) Partial VMA view is then fixed to use the new DMA lookup and properly query sg length. v2: * Checkpatch. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Lu Baolu <baolu.lu@linux.intel.com> Cc: Tom Murphy <murphyt7@tcd.ie> Cc: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201006092508.1064287-2-tvrtko.ursulin@linux.intel.com
2020-09-30  drm/i915/gem: Prevent using pgprot_writecombine() if PAT is not supported  Chris Wilson  1  -0/+4
Let's not try and use PAT attributes for I915_MAP_WC if the CPU doesn't support PAT. Fixes: 6056e50033d9 ("drm/i915/gem: Support discontiguous lmem object maps") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Cc: <stable@vger.kernel.org> # v5.6+ Link: https://patchwork.freedesktop.org/patch/msgid/20200915091417.4086-2-chris@chris-wilson.co.uk (cherry picked from commit 121ba69ffddc60df11da56f6d5b29bdb45c8eb80) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2020-09-30  drm/i915/gem: Avoid implicit vmap for highmem on x86-32  Chris Wilson  1  -2/+24
On 32b, highmem uses a finite set of indirect PTEs (i.e. vmap) to provide virtual mappings of the high pages. As these are finite, map_new_virtual() must wait for some other kmap() to finish when it runs out. If we map a large number of objects, there is no method for it to tell us to release the mappings, and we deadlock. However, if we make an explicit vmap of the page, that uses a larger vmalloc arena, and also has the ability to tell us to release unwanted mappings. Most importantly, it will fail and propagate an error instead of waiting forever. Fixes: fb8621d3bee8 ("drm/i915: Avoid allocating a vmap arena for a single page") #x86-32 References: e87666b52f00 ("drm/i915/shrinker: Hook up vmap allocation failure notifier") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Cc: <stable@vger.kernel.org> # v4.7+ Link: https://patchwork.freedesktop.org/patch/msgid/20200915091417.4086-1-chris@chris-wilson.co.uk (cherry picked from commit 060bb115c2d664f04db9c7613a104dfaef3fdd98) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
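A sketch of the difference, simplified to a single mapping call: kmap() of many pages can block forever waiting for a free slot, whereas an explicit vmap() of the same pages simply returns NULL and lets the caller propagate an error (the helper name is illustrative).

    #include <linux/vmalloc.h>
    #include <linux/mm.h>

    /*
     * Map a whole object through the vmalloc arena. On 32-bit this can fail
     * with NULL when address space runs out, which the caller turns into
     * -ENOMEM, instead of deadlocking inside kmap()'s finite slot pool.
     */
    static void *map_object_pages(struct page **pages, unsigned int n)
    {
        return vmap(pages, n, 0, PAGE_KERNEL);
    }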
2020-09-15  drm/i915/gem: Prevent using pgprot_writecombine() if PAT is not supported  Chris Wilson  1  -0/+4
Let's not try and use PAT attributes for I915_MAP_WC if the CPU doesn't support PAT. Fixes: 6056e50033d9 ("drm/i915/gem: Support discontiguous lmem object maps") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Cc: <stable@vger.kernel.org> # v5.6+ Link: https://patchwork.freedesktop.org/patch/msgid/20200915091417.4086-2-chris@chris-wilson.co.uk
2020-09-15  drm/i915/gem: Avoid implicit vmap for highmem on x86-32  Chris Wilson  1  -2/+24
On 32b, highmem uses a finite set of indirect PTEs (i.e. vmap) to provide virtual mappings of the high pages. As these are finite, map_new_virtual() must wait for some other kmap() to finish when it runs out. If we map a large number of objects, there is no method for it to tell us to release the mappings, and we deadlock. However, if we make an explicit vmap of the page, that uses a larger vmalloc arena, and also has the ability to tell us to release unwanted mappings. Most importantly, it will fail and propagate an error instead of waiting forever. Fixes: fb8621d3bee8 ("drm/i915: Avoid allocating a vmap arena for a single page") #x86-32 References: e87666b52f00 ("drm/i915/shrinker: Hook up vmap allocation failure notifier") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Cc: <stable@vger.kernel.org> # v4.7+ Link: https://patchwork.freedesktop.org/patch/msgid/20200915091417.4086-1-chris@chris-wilson.co.uk
2020-09-08  Revert "drm/i915: Remove i915_gem_object_get_dirty_page()"  Dave Airlie  1  -0/+14
These commits caused a regression on a Lenovo T520 Sandybridge machine belonging to the reporter. We are reverting them for 5.10 for other reasons, so just do it for 5.9 as well. This reverts commit 763fedd6a216f94c2eb98d2f7ca21be3d3806e69. Reported-by: Harald Arnesen <harald@skogtun.org> Signed-off-by: Dave Airlie <airlied@redhat.com>
2020-08-23  treewide: Use fallthrough pseudo-keyword  Gustavo A. R. Silva  1  -1/+1
Replace the existing /* fall through */ comments and its variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings when it is the case. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
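The conversion in this file is mechanical; an illustrative (not verbatim) example of the pattern, with made-up case values and helper:

    void do_map(bool wc);               /* placeholder */

    static void apply_map(int map_type)
    {
        bool wc = false;

        switch (map_type) {
        case 1:                         /* write-combining variant */
            wc = true;
            fallthrough;                /* replaces the old fall-through comment */
        case 0:                         /* write-back variant */
            do_map(wc);
            break;
        }
    }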
2020-07-08  drm/i915: Remove i915_gem_object_get_dirty_page()  Chris Wilson  1  -14/+0
Last user removed, remove the get_dirty_page convenience function. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200708173748.32734-4-chris@chris-wilson.co.uk
2020-07-08  drm/i915: Release shortlived maps of longlived objects  Chris Wilson  1  -0/+15
Some objects we map once during their construction, and then never access their mappings again, even if they are kept around for the duration of the driver. Keeping those pages mapped, often vmapped, is therefore wasteful and we should release the maps as soon as we no longer need them. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200708173748.32734-3-chris@chris-wilson.co.uk
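The pattern being encouraged looks roughly like this, with hypothetical object type and helper names: map, do the one-time initialisation, then drop the mapping instead of keeping it for the object's lifetime.

    #include <linux/err.h>

    struct my_obj;                              /* hypothetical object type */
    void *obj_pin_map(struct my_obj *obj);      /* maps (often vmaps) the pages */
    void obj_unpin_map(struct my_obj *obj);
    void write_initial_contents(void *vaddr);

    static void init_longlived_object(struct my_obj *obj)
    {
        void *vaddr = obj_pin_map(obj);

        if (IS_ERR(vaddr))
            return;

        write_initial_contents(vaddr);

        /* The CPU never touches this object again: drop the map now. */
        obj_unpin_map(obj);
    }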
2020-05-11  drm/i915/selftests: Always flush before unpinning after writing  Chris Wilson  1  -0/+1
Be consistent, and even when we know we had used a WC mapping, flush the mapped object after writing into it. The flush understands the mapping type and will only clflush if !I915_MAP_WC, but will always insert a wmb [sfence] so that we can be sure that all writes are visible. v2: Add the unconditional wmb so we know that we always flush the writes to memory/HW at that point. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200511141304.599-1-chris@chris-wilson.co.uk
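A hedged sketch of what that flush amounts to; the map-type enum and function name are illustrative, while drm_clflush_virt_range() and wmb() are the real kernel primitives:

    #include <drm/drm_cache.h>
    #include <asm/barrier.h>

    enum my_map { MY_MAP_WB, MY_MAP_WC };

    static void flush_written_map(void *vaddr, unsigned long size, enum my_map type)
    {
        /* WB mappings may still have dirty cache lines to push out. */
        if (type != MY_MAP_WC)
            drm_clflush_virt_range(vaddr, size);

        /* Always order the writes (sfence on x86) before unpinning. */
        wmb();
    }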
2020-04-02  drm/i915/gem: Drop cached obj->bind_count  Chris Wilson  1  -2/+0
We cached the number of vmas bound to the object in order to speed up shrinker decisions. This has been superseded by being more proactive in removing objects we cannot shrink from the shrinker lists, and so we can drop the clumsy attempt at atomically counting the bind count and comparing it to the number of pinned mappings of the object. This will only get clumsier with asynchronous binding and unbinding. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200401223924.16667-1-chris@chris-wilson.co.uk
2020-01-27  drm/i915/gem: manual conversion to struct drm_device logging macros.  Wambui Karuga  1  -1/+3
Convert most of the remaining uses of the printk based logging macros to the new struct drm_device based logging macros in drm/i915/gem. This also involves extracting the struct drm_i915_private device from various types, and using it in the various macros. Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Wambui Karuga <wambui.karugax@gmail.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200122125750.9737-3-wambui.karugax@gmail.com
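An example of the shape of the conversion (the message, counter, and stand-in struct are made up; only the macro names are real): printk-style DRM macros become device-qualified ones that take the struct drm_device.

    #include <drm/drm_device.h>
    #include <drm/drm_print.h>

    struct my_i915 {
        struct drm_device drm;          /* stand-in for drm_i915_private */
    };

    static void report(struct my_i915 *i915, unsigned long npages)
    {
        /* Before: DRM_DEBUG("failed to remap %lu pages\n", npages); */
        /* After: the message is attributed to this specific drm_device. */
        drm_dbg(&i915->drm, "failed to remap %lu pages\n", npages);
    }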