path: root/drivers/gpu/drm/msm/msm_ringbuffer.c
Age  Commit message  (Author; files changed, lines -removed/+added)

2022-09-30  drm/msm/gem: Unpin objects slightly later  (Rob Clark; 1 file, -1/+2)

The introduction of "drm/msm/gem: Evict active GEM objects when necessary" exposes a problem with "drm/msm/gem: Unpin buffers earlier": we need to keep the object pinned while the submit is queued up in the gpu scheduler.  Otherwise the shrinker will see it as something that can be evicted once we wait for it to be signaled.  But if the shrinker path is waiting on it with the obj lock held, the job cannot be scheduled, as that also requires briefly grabbing the obj lock, leading to deadlock.  (Not to mention, we don't want the shrinker to evict an obj queued up in the gpu scheduler.)

Fixes: f371bcc0c2ac ("drm/msm/gem: Unpin buffers earlier")
Fixes: 025d27239a2f ("drm/msm/gem: Evict active GEM objects when necessary")
Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/19
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Chia-I Wu <olvaffe@gmail.com>
Patchwork: https://patchwork.freedesktop.org/patch/504528/
Link: https://lore.kernel.org/r/20220923224043.2449152-1-robdclark@gmail.com

2022-08-28  drm/msm: Remove unnecessary pm_runtime_get/put  (Akhil P Oommen; 1 file, -4/+0)

We already enable gpu power from msm_gpu_submit(), so avoid a duplicate pm_runtime_get/put from msm_job_run().

Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Patchwork: https://patchwork.freedesktop.org/patch/498390/
Link: https://lore.kernel.org/r/20220819015030.v5.1.Icf1e8f0c9b3e7e9933c3b48c70477d0582f3243f@changeid
Signed-off-by: Rob Clark <robdclark@chromium.org>

2022-06-15  drm/msm/gem: Separate object and vma unpin  (Rob Clark; 1 file, -1/+1)

Previously the BO_PINNED state in the submit was tracking two related but different things: (1) that the buffer object was pinned, and (2) that the vma (mapping within a set of pagetables) was pinned.  But with fenced vma unpin (needed so that userspace couldn't race with retire path for releasing a vma) these two were decoupled.  The fact that the BO_PINNED flag was already cleared meant that we leaked the bo pin count which should have been dropped when the submit was retired.

So split this state into BO_OBJ_PINNED and BO_VMA_PINNED, so they can be dropped independently.

Fixes: 95d1deb02a9c ("drm/msm/gem: Add fenced vma unpin")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/487559/
Link: https://lore.kernel.org/r/20220527172341.2151005-1-robdclark@gmail.com

2022-04-27  drm/msm: change msm_sched_ops from global to static  (Tom Rix; 1 file, -1/+1)

Smatch reports this issue:

  msm_ringbuffer.c:43:36: warning: symbol 'msm_sched_ops' was not declared. Should it be static?

msm_sched_ops is only used in msm_ringbuffer.c, so change its storage-class specifier to static.

Signed-off-by: Tom Rix <trix@redhat.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/482883/
Link: https://lore.kernel.org/r/20220421131507.1557667-1-trix@redhat.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
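
A minimal sketch of the result, assuming the run_job/free_job wiring the driver already uses (the exact set of drm_sched_backend_ops callbacks varies by kernel version):

  static const struct drm_sched_backend_ops msm_sched_ops = {
          /* 'static' gives the symbol internal linkage; it is only
           * referenced from msm_ringbuffer.c */
          .run_job = msm_job_run,
          .free_job = msm_job_free,
  };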

2022-04-21  drm/msm/gem: Add fenced vma unpin  (Rob Clark; 1 file, -1/+12)

With userspace allocated iova (next patch), we can have a race condition where userspace observes the fence completion and deletes the vma before retire_submit() gets around to unpinning the vma.

To handle this, add a fenced unpin which drops the refcount but tracks the fence, and update msm_gem_vma_inuse() to check any previously unsignaled fences.

v2: Fix inuse underflow (duplicate unpin)
v3: Fix msm_job_run() vs submit_cleanup() race condition

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220411215849.297838-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

2022-02-23  drm/sched: Add device pointer to drm_gpu_scheduler  (Jiawei Gu; 1 file, -1/+1)

Add a device pointer so the scheduler's printing can use DRM_DEV_ERROR() instead, which makes life easier in multi-GPU scenarios.

v2: amend all calls of drm_sched_init()
v3: fill dev pointer for all drm_sched_init() calls

Signed-off-by: Jiawei Gu <Jiawei.Gu@amd.com>
Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220221095705.5290-1-Jiawei.Gu@amd.com
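
A hedged illustration of what the stored device pointer enables inside a scheduler callback (drm_sched_init() parameter order differs across kernel versions, so only the logging side is shown):

  /* with sched->dev filled in by drm_sched_init(), error paths can
   * name the failing device instead of using a bare DRM_ERROR() */
  DRM_DEV_ERROR(sched->dev, "%s: job timed out\n", sched->name);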

2021-11-28  drm/msm: Remove struct_mutex usage  (Rob Clark; 1 file, -2/+2)

The remaining struct_mutex usage is just to serialize various gpu related things (submit/retire/recover/fault/etc), so replace struct_mutex with gpu->lock.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20211109181117.591148-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

2021-08-30  drm/msm: Use scheduler dependency handling  (Daniel Vetter; 1 file, -12/+0)

drm_sched_job_init is already at the right place, so this boils down to deleting code.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Link: https://patchwork.freedesktop.org/patch/msgid/20210805104705.862416-13-daniel.vetter@ffwll.ch

2021-07-30  Merge tag 'drm-msm-next-2021-07-28' of https://gitlab.freedesktop.org/drm/msm into drm-next  (Dave Airlie; 1 file, -3/+66)

An early pull for v5.15 (there'll be more coming in a week or two), consisting of the drm/scheduler conversion and a couple of other small series that one was based on.  Mostly sending this now because IIUC danvet wanted it in drm-next so he could rebase on it.  (Daniel, if you disagree then speak up, and I'll instead include this in the main pull request once that is ready.)

This also has a core patch to drop drm_gem_object_put_locked() now that the last use of it is removed.

[airlied: add NULL to drm_sched_init]
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGumRk7H88bqV=H9Fb1SM0zPBo5B7NsCU3jFFKBYxf5k+Q@mail.gmail.com

2021-07-28  drm/msm: Conversion to drm scheduler  (Rob Clark; 1 file, -0/+63)

For existing adrenos, there are one or more ringbuffers, depending on whether preemption is supported.  When preemption is supported, each ringbuffer has its own priority.  A submitqueue (which maps to a gl context or vk queue in userspace) is mapped to a specific ringbuffer at creation time, based on the submitqueue's priority.

Each ringbuffer has its own drm_gpu_scheduler.  Each submitqueue maps to a drm_sched_entity.  And each submit maps to a drm_sched_job.

Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/4
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
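
The mapping described above, as a hedged structural sketch (field layout loosely follows the driver and is not the verbatim upstream code):

  struct msm_ringbuffer {
          /* ... hw ring state ... */
          struct drm_gpu_scheduler sched;    /* one scheduler per ring */
  };

  struct msm_gpu_submitqueue {
          /* ... */
          struct drm_sched_entity entity;    /* bound to one ring by priority */
  };

  struct msm_gem_submit {
          struct drm_sched_job base;         /* each submit is a scheduler job */
          /* ... */
  };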

2021-07-27  drm/msm: drop drm_gem_object_put_locked()  (Rob Clark; 1 file, -1/+1)

No idea why we were still using this.  It certainly hasn't been needed for some time.  So drop the pointless twin codepaths.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

2021-07-27  drm/msm: Docs and misc cleanup  (Rob Clark; 1 file, -1/+1)

Fix a couple of incorrect or misspelt comments, and add a submitqueue doc comment.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

2021-07-27  drm/msm: Let fences read directly from memptrs  (Rob Clark; 1 file, -1/+1)

Let dma_fence::signaled, etc, read directly from the address that the hw is writing with the updated completed fence seqno, so we can potentially notice that the fence is signaled sooner.

Plus add some docs.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210726144359.2179302-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
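
A simplified model of the check, assuming a fence context that keeps a pointer into the memptrs slot the GPU updates (names loosely follow the driver; the signed subtraction handles seqno wraparound):

  static bool fence_completed(struct msm_fence_context *fctx, uint32_t fence)
  {
          /* compare against the seqno the hw wrote, rather than a cached
           * value that is only refreshed from the IRQ path */
          return (int32_t)(READ_ONCE(*fctx->fenceptr) - fence) >= 0;
  }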

2020-11-04  drm/msm: Protect ring->submits with it's own lock  (Rob Clark; 1 file, -0/+1)

One less place to rely on dev->struct_mutex.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>

2020-11-04  drm/msm: Document and rename preempt_lock  (Rob Clark; 1 file, -1/+1)

Before adding another lock, give ring->lock a more descriptive name.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>

2020-09-04  drm/msm: Enable expanded apriv support for a650  (Jordan Crouse; 1 file, -2/+2)

a650 supports expanded apriv, which allows us to map critical buffers (ringbuffer and memstore) as privileged to protect them from corruption.

Cc: stable@vger.kernel.org
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>

2020-08-17  drm/msm/gpu: make ringbuffer readonly  (Rob Clark; 1 file, -1/+2)

The GPU has no business writing into the ringbuffer, so let's make it readonly to the GPU.

Fixes: 7198e6b03155 ("drm/msm: add a3xx gpu support")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>

2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner; 1 file, -12/+1)

Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation this program is distributed
  in the hope that it will be useful but without any warranty without
  even the implied warranty of merchantability or fitness for a
  particular purpose see the gnu general public license for more details
  you should have received a copy of the gnu general public license
  along with this program if not see http www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2018-12-11  drm/msm/gpu: Map the ringbuffer in the iova at create time  (Jordan Crouse; 1 file, -2/+2)

For reasons that I'm sure made perfect sense at the time, we were opting to defer the iova alloc/pin on the ringbuffer until HW init time, so when we moved to iova reference counting we ended up adding a reference count every time the hardware started.  Not that it mattered (because the ring is always around) but it did make the debug output look odd.  Allocate and pin the iova at create time instead.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>

2018-12-11  drm/msm: Add a name field for gem objects  (Jordan Crouse; 1 file, -0/+3)

For debugging purposes it is useful to assign descriptions to buffers so that we know what they are used for.  Add a field to the buffer object and use that to name the various kernel side allocations, which ends up looking like this in /d/dri/X/gem:

  flags       id ref  offset   kaddr            size     madv     name
  00040000: I  0 ( 1) 00000000 0000000070b79eca 00004096          memptrs
      vmas: [gpu: 01000000,mapped,inuse=1]
  00020000: I  0 ( 1) 00000000 0000000031ed4074 00032768          ring0

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
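
A hedged usage sketch of the new field via the printf-style helper added for it (the call sites are illustrative, chosen to match the names visible in the debugfs output above):

  msm_gem_object_set_name(ring->bo, "ring%d", ring->id);
  msm_gem_object_set_name(gpu->memptrs_bo, "memptrs");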

2018-12-11  drm/msm: Add a common function to free kernel buffer objects  (Jordan Crouse; 1 file, -5/+2)

Buffer objects allocated with msm_gem_kernel_new() are mostly freed the same way, so we can save a few lines of code with a common function.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>

2018-02-20  drm/msm: Replace gem_object deprecated functions  (Steve Kowalik; 1 file, -1/+1)

drm_gem_object_{reference,unreference,unreference_unlocked} are deprecated functions, and merely alias to the get/put functions.  Switch to the new names.

Signed-off-by: Steve Kowalik <steven@wedontsleep.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
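
The substitution is mechanical; a sketch of the one-to-one mapping (the old names were already thin aliases for the new ones):

  drm_gem_object_get(obj);           /* was drm_gem_object_reference(obj) */
  drm_gem_object_put(obj);           /* was drm_gem_object_unreference(obj) */
  drm_gem_object_put_unlocked(obj);  /* was drm_gem_object_unreference_unlocked(obj) */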

2017-10-28  drm/msm: Implement preemption for A5XX targets  (Jordan Crouse; 1 file, -0/+1)

Implement preemption for A5XX targets - this allows multiple ringbuffers for different priorities, with automatic preemption of a lower priority ringbuffer if a higher one is ready.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>

2017-10-28  drm/msm: Shadow current pointer in the ring until command is complete  (Jordan Crouse; 1 file, -0/+1)

Add a shadow pointer to track the current command being written into the ring.  Don't commit it as 'cur' until the command is submitted.  Because 'cur' is used to construct the software copy of the wptr, this ensures that somebody peeking in on the ring doesn't assume that a command is inflight while it is being written.

This isn't a huge deal with a single ring (though technically the hangcheck could assume the system is prematurely busy when it isn't), but it will be rather important for preemption, where the decision to preempt is based on a non-empty ringbuffer.  Without a shadow, an aggressive preemption scheme could assume that the ringbuffer is non-empty and switch to it before the CPU is done writing the command - and boom.

Even though preemption won't be supported for all targets, because of the way the code is organized it is simpler to make this generic for all targets.  The extra load for non-preemption targets should be minimal.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
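
A minimal model of the shadow pointer, under simplified field names close to the driver's (illustrative, not the exact patch):

  struct ring {
          uint32_t *start;
          uint32_t *cur;     /* committed position: what wptr/hangcheck observe */
          uint32_t *next;    /* shadow: where new commands are being written */
  };

  static inline void ring_emit(struct ring *ring, uint32_t data)
  {
          *(ring->next++) = data;      /* only the shadow advances here */
  }

  static inline void ring_commit(struct ring *ring)
  {
          ring->cur = ring->next;      /* becomes visible at submit time */
  }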

2017-10-28  drm/msm: Support multiple ringbuffers  (Jordan Crouse; 1 file, -10/+24)

Add the infrastructure to support the idea of multiple ringbuffers.  Assign each ringbuffer an id and use that as an index for the various ring specific operations.

The biggest delta is to support legacy fences.  Each fence gets its own sequence number, but the legacy functions expect to use a unique integer.  To handle this we return a unique identifier for each submission but map it to a specific ring/sequence under the covers.  Newer users use a dma_fence pointer anyway so they don't care about the actual sequence ID or ring.

The actual mechanics for multiple ringbuffers are very target specific, so this code just allows for the possibility but still only defines one ringbuffer for each target family.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>

2017-08-22  drm/msm: Add a helper function for in-kernel buffer allocations  (Jordan Crouse; 1 file, -7/+5)

Nearly all of the in-kernel buffer allocations allocate a buffer object, virtual address and GPU iova at the same time.  Make a helper function to handle the details.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
[dropped msm_fbdev conversion to new helper, since it interferes with display-handover work, where we want to separate allocation and mapping]
Signed-off-by: Rob Clark <robdclark@gmail.com>
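
A hedged usage sketch of the helper (the signature shown matches later trees and may differ slightly at this point in the history): one call replaces the separate object-allocate, vmap and iova-pin steps.

  struct drm_gem_object *bo;
  uint64_t iova;
  void *vaddr;

  vaddr = msm_gem_kernel_new(gpu->dev, SZ_4K, MSM_BO_WC, gpu->aspace,
                             &bo, &iova);
  if (IS_ERR(vaddr))
          return PTR_ERR(vaddr);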

2017-06-17  drm/msm: Separate locking of buffer resources from struct_mutex  (Sushmita Susheelendra; 1 file, -1/+1)

Buffer object specific resources like pages, domains, and the sg list need not be protected with struct_mutex.  They can be protected with a buffer object level lock.  This simplifies locking and makes it easier to avoid potential recursive locking scenarios for SVM involving mmap_sem and struct_mutex.

This also removes unnecessary serialization when creating buffer objects, and also between buffer object creation and GPU command submission.

Signed-off-by: Sushmita Susheelendra <ssusheel@codeaurora.org>
[robclark: squash in handling new locking for shrinker]
Signed-off-by: Rob Clark <robdclark@gmail.com>

2016-12-29  drm/msm: Ensure that the hardware write pointer is valid  (Jordan Crouse; 1 file, -1/+2)

Currently the value written to CP_RB_WPTR is calculated on the fly as (rb->next - rb->start).  But as the code is designed, rb->next is wrapped before writing the commands, so if a series of commands happened to fit perfectly in the ringbuffer, (rb->next - rb->start) would end up being equal to rb->size / 4 and thus result in an out of bounds address to CP_RB_WPTR.

The easiest way to fix this is to mask WPTR when writing it to the hardware; it makes the hardware happy and the rest of the ringbuffer math appears to work and there isn't any point in upsetting anything.

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
[squash in is_power_of_2() check]
Signed-off-by: Rob Clark <robdclark@gmail.com>
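
A sketch of the resulting wptr calculation, using the field names from the message above (the ring size is in bytes, entries are 32-bit dwords, and the size is validated with is_power_of_2() at creation):

  /* rb->next may sit exactly one slot past the end after a perfectly
   * sized submission; the modulo wraps the programmed wptr back into
   * the ring */
  uint32_t wptr = (rb->next - rb->start) % (rb->size / 4);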

2016-07-16  drm/msm: change gem->vmap() to get/put  (Rob Clark; 1 file, -2/+4)

Before we can add vmap shrinking, we really need to know which vmap'ings are currently being used.  So switch to a get/put interface.  The put functions are stubbed for now.

Signed-off-by: Rob Clark <robdclark@gmail.com>
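
A hedged call-site sketch of the get/put pairing (combined with the error handling added in the entry below, the getter's return value is checked with IS_ERR()):

  void *ptr = msm_gem_get_vaddr(obj);
  if (IS_ERR(ptr))
          return PTR_ERR(ptr);
  /* ... use the vmap'ing ... */
  msm_gem_put_vaddr(obj);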

2016-06-04  drm/msm: deal with exhausted vmap space better  (Rob Clark; 1 file, -0/+4)

Some, but not all, callers of obj->vmap() would check the return value with IS_ERR().  So let's actually return an error if vmap() fails, and fix up the call-sites that were not handling this properly.

Signed-off-by: Rob Clark <robdclark@gmail.com>

2015-05-15  drm/msm: fix locking inconsistencies in gpu->destroy()  (Rob Clark; 1 file, -1/+1)

In error paths, this was being called without struct_mutex held, leading to panics like:

  msm 1a00000.qcom,mdss_mdp: No memory protection without IOMMU
  Kernel panic - not syncing: BUG!
  CPU: 0 PID: 1409 Comm: cat Not tainted 4.0.0-dirty #4
  Hardware name: Qualcomm Technologies, Inc. APQ 8016 SBC (DT)
  Call trace:
  [<ffffffc000089c78>] dump_backtrace+0x0/0x118
  [<ffffffc000089da0>] show_stack+0x10/0x20
  [<ffffffc0006686d4>] dump_stack+0x84/0xc4
  [<ffffffc0006678b4>] panic+0xd0/0x210
  [<ffffffc0003e1ce4>] drm_gem_object_free+0x5c/0x60
  [<ffffffc000402870>] adreno_gpu_cleanup+0x60/0x80
  [<ffffffc0004035a0>] a3xx_destroy+0x20/0x70
  [<ffffffc0004036f4>] a3xx_gpu_init+0x84/0x108
  [<ffffffc0004018b8>] adreno_load_gpu+0x58/0x190
  [<ffffffc000419dac>] msm_open+0x74/0x88
  [<ffffffc0003e0a48>] drm_open+0x168/0x400
  [<ffffffc0003e7210>] drm_stub_open+0xa8/0x118
  [<ffffffc0001a0e84>] chrdev_open+0x94/0x198
  [<ffffffc000199f88>] do_dentry_open+0x208/0x310
  [<ffffffc00019a4c4>] vfs_open+0x44/0x50
  [<ffffffc0001aa26c>] do_last.isra.14+0x2c4/0xc10
  [<ffffffc0001aac38>] path_openat+0x80/0x5e8
  [<ffffffc0001ac354>] do_filp_open+0x2c/0x98
  [<ffffffc00019b60c>] do_sys_open+0x13c/0x228
  [<ffffffc00019b72c>] SyS_openat+0xc/0x18
  CPU1: stopping

But there isn't any particularly good reason to hold struct_mutex for teardown, so just standardize on calling it without the mutex held and use the _unlocked() versions for GEM obj unref'ing.

Signed-off-by: Rob Clark <robdclark@gmail.com>

2013-08-24  drm/msm: add a3xx gpu support  (Rob Clark; 1 file, -0/+61)

Add initial support for a3xx 3d core.

So far, with hardware that I've seen to date, we can have:
 + zero, one, or two z180 2d cores
 + a3xx or a2xx 3d core, which share a common CP (the firmware for the CP
   seems to implement some different PM4 packet types but the basics of
   cmdstream submission are the same)

Which means that the eventual complete "class" hierarchy, once support for all past and present hw is in place, becomes:
 + msm_gpu
   + adreno_gpu
     + a3xx_gpu
     + a2xx_gpu
   + z180_gpu

This commit splits out the parts that will eventually be common between a2xx/a3xx into adreno_gpu, and the parts that are even common to z180 into msm_gpu.

Note that there is no cmdstream validation required.  All memory access from the GPU is via IOMMU/MMU.  So as long as you don't map silly things to the GPU, there isn't much damage that the GPU can do.

Signed-off-by: Rob Clark <robdclark@gmail.com>