path: root/include/drm/drm_gem_shmem_helper.h

2020-11-24  drm/shmem-helper: Remove drm_gem_shmem_create_object_cached()  (Thomas Zimmermann, 1 file, -3/+0)
Cached page mappings are now the default for SHMEM GEM objects. Remove
the obsolete create function for cached mappings.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Maxime Ripard <mripard@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20201117133156.26822-3-tzimmermann@suse.de

2020-11-24  drm/shmem-helper: Use cached mappings by default  (Thomas Zimmermann, 1 file, -2/+2)
SHMEM-buffer backing storage is allocated from system memory, which is
typically cachable. The default mode for SHMEM objects is writecombine
though. Unify SHMEM semantics by defaulting to cached mappings. The
exception is pages imported via dma-buf. DMA memory is usually not
cached.

DRM drivers that require write-combined mappings set the map_wc flag
in struct drm_gem_shmem_object to true. This currently affects lima,
panfrost and v3d.

The drivers mgag200, udl, virtio and vkms continue to use default
shmem mappings.

The drivers cirrus and gm12u320 change caching flags. Both used
writecombine and now switch over to shmem defaults. Both drivers use
SHMEM objects as shadow buffers for internal video memory, so cached
mappings will not affect them negatively.

v3:
* set value of shmem pointer before dereferencing it in __drm_gem_shmem_create() (Dan, kernel test robot)
v2:
* recreate patch on top of latest SHMEM helpers
* update lima, panfrost, v3d to select writecombine (Daniel, Rob)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Maxime Ripard <mripard@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20201117133156.26822-2-tzimmermann@suse.de
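
For illustration, the opt-in described above amounts to a one-line change in a driver's object constructor. A minimal sketch, assuming a hypothetical gem_create_object hook (real drivers such as lima, panfrost and v3d embed the shmem object in a larger driver structure):

  #include <linux/slab.h>
  #include <drm/drm_gem_shmem_helper.h>

  /* Hypothetical hook; names are illustrative, not taken from any driver. */
  static struct drm_gem_object *my_gem_create_object(struct drm_device *dev,
                                                     size_t size)
  {
          struct drm_gem_shmem_object *shmem;

          shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
          if (!shmem)
                  return NULL;

          /* Opt back into write-combined CPU mappings; cached is now the default. */
          shmem->map_wc = true;

          return &shmem->base;
  }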

2020-11-09  drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends  (Thomas Zimmermann, 1 file, -2/+2)
This patch replaces vmap/vunmap's use of raw pointers in GEM object
functions with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned
type.

TTM-based drivers now return information about the location of the
memory, either system or I/O memory. GEM VRAM helpers and qxl now use
ttm_bo_vmap() et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap()
et al instead of implementing their own vmap callbacks.

v7:
* init QXL cursor to mapped BO buffer (kernel test robot)
v5:
* update vkms after switch to shmem
v4:
* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
* fix a trailing { in drm_gem_vmap()
* remove several empty functions instead of converting them (Daniel)
* comment uses of raw pointers with a TODO (Daniel)
* TODO list: convert more helpers to use struct dma_buf_map

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20201103093015.1063-7-tzimmermann@suse.de
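
As an illustration, a sketch of what a converted vmap callback looks like under the new interface; my_pin_and_map_pages() is a made-up driver helper, and struct dma_buf_map records whether the returned address refers to system or I/O memory:

  #include <linux/dma-buf-map.h>
  #include <drm/drm_gem.h>

  void *my_pin_and_map_pages(struct drm_gem_object *obj); /* hypothetical helper */

  static int my_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
  {
          void *vaddr = my_pin_and_map_pages(obj);

          if (!vaddr)
                  return -ENOMEM;

          /* System memory; an I/O-memory backend would fill in the iomem variant instead. */
          dma_buf_map_set_vaddr(map, vaddr);
          return 0;
  }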

2020-06-10  drm/shmem-helper: Add .gem_create_object helper that sets map_cached flag  (Thomas Zimmermann, 1 file, -0/+4)
The helper drm_gem_shmem_create_object_cached() allocates a GEM SHMEM
object and sets the map_cached flag. Useful for drivers that want
cached mappings.

v3:
* style fixes

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200609090820.20256-2-tzimmermann@suse.de
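
A sketch of how a driver would opt in (the surrounding driver structure is illustrative and elided): pointing the gem_create_object hook at the new helper makes every SHMEM object the driver allocates use cached mappings.

  #include <drm/drm_drv.h>
  #include <drm/drm_gem_shmem_helper.h>

  static struct drm_driver my_driver = {
          /* ... features, fops, name, etc. ... */
          .gem_create_object = drm_gem_shmem_create_object_cached,
          DRM_GEM_SHMEM_DRIVER_OPS,
  };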

2020-02-27  drm/shmem: add support for per object caching flags.  (Gerd Hoffmann, 1 file, -0/+5)
Add a map_cached bool to drm_gem_shmem_object, to request cached
mappings on a per-object basis. Check the flag before adding
writecombine to pgprot bits.

Cc: stable@vger.kernel.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Guillaume Gardet <Guillaume.Gardet@arm.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200226154752.24328-2-kraxel@redhat.com
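
The effect on the mmap path can be sketched as follows; this is the idea, not the exact helper code, and the function name is made up:

  #include <linux/mm.h>
  #include <drm/drm_gem_shmem_helper.h>

  static void my_setup_page_prot(struct drm_gem_shmem_object *shmem,
                                 struct vm_area_struct *vma)
  {
          vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

          /* Only add write-combine when the object did not ask for cached mappings. */
          if (!shmem->map_cached)
                  vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
  }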

2019-11-14  Merge v5.4-rc7 into drm-next  (Dave Airlie, 1 file, -0/+13)
We have the i915 security fixes to backmerge, but first let's clear the
decks for other drivers to avoid a bigger mess.

Signed-off-by: Dave Airlie <airlied@redhat.com>

2019-11-06  drm/shmem: Add docbook comments for drm_gem_shmem_object madvise fields  (Rob Herring, 1 file, -0/+13)
Add missing docbook comments to madvise fields in struct
drm_gem_shmem_object, which fixes these warnings:

include/drm/drm_gem_shmem_helper.h:87: warning: Function parameter or member 'madv' not described in 'drm_gem_shmem_object'
include/drm/drm_gem_shmem_helper.h:87: warning: Function parameter or member 'madv_list' not described in 'drm_gem_shmem_object'

Fixes: 17acb9f35ed7 ("drm/shmem: Add madvise state and purge helpers")
Reported-by: Sean Paul <sean@poorly.run>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20191101153754.22803-1-robh@kernel.org
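
The fix itself is ordinary kernel-doc member comments; a sketch of their shape (wording approximated, not quoted from the patch):

  struct drm_gem_shmem_object {
          /* ... */

          /**
           * @madv: State for madvise
           *
           * 0 is active/inuse, a negative value means the object has been
           * purged, and positive values are driver specific.
           */
          int madv;

          /**
           * @madv_list: List entry for madvise tracking
           *
           * Typically used by drivers to track purgeable objects.
           */
          struct list_head madv_list;

          /* ... */
  };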

2019-10-17  drm/shmem: drop DEFINE_DRM_GEM_SHMEM_FOPS  (Gerd Hoffmann, 1 file, -26/+0)
DEFINE_DRM_GEM_SHMEM_FOPS is identical to DEFINE_DRM_GEM_FOPS now,
drop it.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191016115203.20095-6-kraxel@redhat.com
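
A shmem-based driver now just uses the generic macro; a minimal sketch with illustrative names:

  #include <drm/drm_drv.h>
  #include <drm/drm_gem.h>

  DEFINE_DRM_GEM_FOPS(my_driver_fops);

  static struct drm_driver my_driver = {
          .driver_features = DRIVER_GEM,
          .fops            = &my_driver_fops,
          /* ... */
  };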

2019-10-17  drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap  (Gerd Hoffmann, 1 file, -4/+2)
Switch gem shmem helper to the new mmap() workflow, from
&gem_driver.fops.mmap to &drm_gem_object_funcs.mmap.

v2: Fix vm_flags and vm_page_prot handling.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20191016115203.20095-3-kraxel@redhat.com
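
A sketch of the new hookup (the funcs table name is illustrative; drivers that stick to the stock shmem helpers typically get a default table from the helper itself): the mmap callback now lives in the per-object drm_gem_object_funcs rather than in the driver's file_operations.

  #include <drm/drm_gem.h>
  #include <drm/drm_gem_shmem_helper.h>

  static const struct drm_gem_object_funcs my_gem_object_funcs = {
          .free = drm_gem_shmem_free_object,
          .mmap = drm_gem_shmem_mmap,
          /* ... print_info, pin, unpin, get_sg_table, vmap, vunmap ... */
  };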

2019-08-28  drm/shmem: Use mutex_trylock in drm_gem_shmem_purge  (Rob Herring, 1 file, -1/+1)
Lockdep reports a circular locking dependency with pages_lock taken in
the shrinker callback. The deadlock can't actually happen with current
users at least as a BO will never be purgeable when pages_lock is held.
To be safe, let's use mutex_trylock() instead and bail if a BO is
locked already.

WARNING: possible circular locking dependency detected
5.3.0-rc1+ #100 Tainted: G L
------------------------------------------------------
kswapd0/171 is trying to acquire lock:
000000009b9823fd (&shmem->pages_lock){+.+.}, at: drm_gem_shmem_purge+0x20/0x40

but task is already holding lock:
00000000f82369b6 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x0/0x40

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}:
       fs_reclaim_acquire.part.18+0x34/0x40
       fs_reclaim_acquire+0x20/0x28
       __kmalloc_node+0x6c/0x4c0
       kvmalloc_node+0x38/0xa8
       drm_gem_get_pages+0x80/0x1d0
       drm_gem_shmem_get_pages+0x58/0xa0
       drm_gem_shmem_get_pages_sgt+0x48/0xd0
       panfrost_mmu_map+0x38/0xf8 [panfrost]
       panfrost_gem_open+0xc0/0xe8 [panfrost]
       drm_gem_handle_create_tail+0xe8/0x198
       drm_gem_handle_create+0x3c/0x50
       panfrost_gem_create_with_handle+0x70/0xa0 [panfrost]
       panfrost_ioctl_create_bo+0x48/0x80 [panfrost]
       drm_ioctl_kernel+0xb8/0x110
       drm_ioctl+0x244/0x3f0
       do_vfs_ioctl+0xbc/0x910
       ksys_ioctl+0x78/0xa8
       __arm64_sys_ioctl+0x1c/0x28
       el0_svc_common.constprop.0+0x90/0x168
       el0_svc_handler+0x28/0x78
       el0_svc+0x8/0xc

-> #0 (&shmem->pages_lock){+.+.}:
       __lock_acquire+0xa2c/0x1d70
       lock_acquire+0xdc/0x228
       __mutex_lock+0x8c/0x800
       mutex_lock_nested+0x1c/0x28
       drm_gem_shmem_purge+0x20/0x40
       panfrost_gem_shrinker_scan+0xc0/0x180 [panfrost]
       do_shrink_slab+0x208/0x500
       shrink_slab+0x10c/0x2c0
       shrink_node+0x28c/0x4d8
       balance_pgdat+0x2c8/0x570
       kswapd+0x22c/0x638
       kthread+0x128/0x130
       ret_from_fork+0x10/0x18

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&shmem->pages_lock);
                               lock(fs_reclaim);
  lock(&shmem->pages_lock);

 *** DEADLOCK ***

3 locks held by kswapd0/171:
 #0: 00000000f82369b6 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x0/0x40
 #1: 00000000ceb37808 (shrinker_rwsem){++++}, at: shrink_slab+0xbc/0x2c0
 #2: 00000000f31efa81 (&pfdev->shrinker_lock){+.+.}, at: panfrost_gem_shrinker_scan+0x34/0x180 [panfrost]

Fixes: 17acb9f35ed7 ("drm/shmem: Add madvise state and purge helpers")
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823021216.5862-6-robh@kernel.org
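
The resulting purge entry point is small; a sketch of the idea (close to, but not quoted from, the helper): back off instead of sleeping on pages_lock, so the shrinker, which runs under fs_reclaim, never blocks on a lock that an allocating path may already hold.

  #include <drm/drm_gem_shmem_helper.h>

  bool drm_gem_shmem_purge(struct drm_gem_object *obj)
  {
          struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

          /* Skip BOs that are currently locked instead of risking a deadlock. */
          if (!mutex_trylock(&shmem->pages_lock))
                  return false;

          drm_gem_shmem_purge_locked(obj);
          mutex_unlock(&shmem->pages_lock);

          return true;
  }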

2019-08-23  drm/shmem: Use mutex_trylock in drm_gem_shmem_purge  (Rob Herring, 1 file, -1/+1)
Lockdep reports a circular locking dependency with pages_lock taken in
the shrinker callback. The deadlock can't actually happen with current
users at least as a BO will never be purgeable when pages_lock is held.
To be safe, let's use mutex_trylock() instead and bail if a BO is
locked already.

WARNING: possible circular locking dependency detected
5.3.0-rc1+ #100 Tainted: G L
------------------------------------------------------
kswapd0/171 is trying to acquire lock:
000000009b9823fd (&shmem->pages_lock){+.+.}, at: drm_gem_shmem_purge+0x20/0x40

but task is already holding lock:
00000000f82369b6 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x0/0x40

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}:
       fs_reclaim_acquire.part.18+0x34/0x40
       fs_reclaim_acquire+0x20/0x28
       __kmalloc_node+0x6c/0x4c0
       kvmalloc_node+0x38/0xa8
       drm_gem_get_pages+0x80/0x1d0
       drm_gem_shmem_get_pages+0x58/0xa0
       drm_gem_shmem_get_pages_sgt+0x48/0xd0
       panfrost_mmu_map+0x38/0xf8 [panfrost]
       panfrost_gem_open+0xc0/0xe8 [panfrost]
       drm_gem_handle_create_tail+0xe8/0x198
       drm_gem_handle_create+0x3c/0x50
       panfrost_gem_create_with_handle+0x70/0xa0 [panfrost]
       panfrost_ioctl_create_bo+0x48/0x80 [panfrost]
       drm_ioctl_kernel+0xb8/0x110
       drm_ioctl+0x244/0x3f0
       do_vfs_ioctl+0xbc/0x910
       ksys_ioctl+0x78/0xa8
       __arm64_sys_ioctl+0x1c/0x28
       el0_svc_common.constprop.0+0x90/0x168
       el0_svc_handler+0x28/0x78
       el0_svc+0x8/0xc

-> #0 (&shmem->pages_lock){+.+.}:
       __lock_acquire+0xa2c/0x1d70
       lock_acquire+0xdc/0x228
       __mutex_lock+0x8c/0x800
       mutex_lock_nested+0x1c/0x28
       drm_gem_shmem_purge+0x20/0x40
       panfrost_gem_shrinker_scan+0xc0/0x180 [panfrost]
       do_shrink_slab+0x208/0x500
       shrink_slab+0x10c/0x2c0
       shrink_node+0x28c/0x4d8
       balance_pgdat+0x2c8/0x570
       kswapd+0x22c/0x638
       kthread+0x128/0x130
       ret_from_fork+0x10/0x18

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&shmem->pages_lock);
                               lock(fs_reclaim);
  lock(&shmem->pages_lock);

 *** DEADLOCK ***

3 locks held by kswapd0/171:
 #0: 00000000f82369b6 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x0/0x40
 #1: 00000000ceb37808 (shrinker_rwsem){++++}, at: shrink_slab+0xbc/0x2c0
 #2: 00000000f31efa81 (&pfdev->shrinker_lock){+.+.}, at: panfrost_gem_shrinker_scan+0x34/0x180 [panfrost]

Fixes: 17acb9f35ed7 ("drm/shmem: Add madvise state and purge helpers")
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190823021216.5862-6-robh@kernel.org

2019-08-08  drm/shmem: Add madvise state and purge helpers  (Rob Herring, 1 file, -0/+15)
Add support to the shmem GEM helpers for tracking madvise state and
purging pages. This is based on the msm implementation.

The BO provides a list_head, but the list management is handled outside
of the shmem helpers as there are different locking requirements.

Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190805143358.21245-1-robh@kernel.org
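
A usage sketch (ioctl plumbing, driver list management and the shrinker loop are omitted; the wrapper names are made up): userspace marks a BO purgeable through a driver madvise ioctl, and the driver's shrinker later purges BOs that are still marked as not needed.

  #include <drm/drm_gem_shmem_helper.h>

  /* Hypothetical wrapper: mark a BO as "don't need", i.e. purgeable. */
  static void my_mark_purgeable(struct drm_gem_object *obj)
  {
          drm_gem_shmem_madvise(obj, 1);
  }

  /* Hypothetical shrinker step: drop the backing pages of a willing BO. */
  static void my_try_purge(struct drm_gem_object *obj)
  {
          struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

          if (drm_gem_shmem_is_purgeable(shmem))
                  drm_gem_shmem_purge(obj);
  }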

2019-03-14  drm: Add library for shmem backed GEM objects  (Noralf Trønnes, 1 file, -0/+159)
This adds a library for shmem backed GEM objects.

v8:
- export drm_gem_shmem_create_with_handle
- call mapping_set_gfp_mask to set default zone to GFP_HIGHUSER
- Add helper drm_gem_shmem_get_pages_sgt()
v7:
- Use write-combine for mmap instead. This is the more common case. (robher)
v6:
- Fix uninitialized variable issue in an error path (anholt).
- Add a drm_gem_shmem_vm_open() to the fops to get proper refcounting of the pages (anholt).
v5:
- Drop drm_gem_shmem_prime_mmap() (Daniel Vetter)
- drm_gem_shmem_mmap(): Subtract drm_vma_node_start() to get the real vma->vm_pgoff
- drm_gem_shmem_fault(): Use vmf->pgoff now that vma->vm_pgoff is correct
v4:
- Drop cache modes (Thomas Hellstrom)
- Add a GEM attached vtable
v3:
- Grammar (Sam Ravnborg)
- s/drm_gem_shmem_put_pages_unlocked/drm_gem_shmem_put_pages_locked/ (Sam Ravnborg)
- Add debug output in error path (Sam Ravnborg)

Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20190313004344.24169-1-robh@kernel.org
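
A minimal usage sketch against this initial version of the helpers (driver name, fops name and feature flags are illustrative): the driver pulls in the shmem fops and driver ops, then allocates shmem-backed BOs with drm_gem_shmem_create().

  #include <drm/drm_drv.h>
  #include <drm/drm_gem_shmem_helper.h>

  DEFINE_DRM_GEM_SHMEM_FOPS(my_fops);

  static struct drm_driver my_driver = {
          .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
          .fops            = &my_fops,
          DRM_GEM_SHMEM_DRIVER_OPS,
  };

  /* Allocate a shmem-backed GEM object of the given size. */
  static struct drm_gem_shmem_object *my_create_bo(struct drm_device *dev,
                                                   size_t size)
  {
          return drm_gem_shmem_create(dev, size);
  }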