path: root/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
Age | Commit message | Author | Files | Lines
2022-05-26 | drm/amdgpu/gfx: fix typos in comments | Julia Lawall | 1 | -2/+2
Spelling mistakes (triple letters) in comments. Detected with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2022-05-26 | drm/amdgpu: Set CP_HQD_PQ_CONTROL.RPTR_BLOCK_SIZE correctly | Haohui Mai | 1 | -1/+1
Remove the accidental shifts on the values of RPTR_BLOCK_SIZE in gfx_v8-v11. The bug essentially always programs the corresponding fields to zero instead of the correct value. The hardware clamps the min value to 5 so this resulted in a value of 5 being programmed. Signed-off-by: Haohui Mai <ricetons@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
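A minimal sketch of the pattern described above, assuming the usual REG_SET_FIELD() helper and the generated CP_HQD_PQ_CONTROL field macros from the amdgpu register headers (rptr_block_size stands in for the computed value):

    /* Buggy: the value is pre-shifted and REG_SET_FIELD() shifts it again,
     * so the bits land outside the RPTR_BLOCK_SIZE mask, the field is
     * programmed as 0 and the hardware clamps it up to 5. */
    tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
                        rptr_block_size << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT);

    /* Fixed: pass the raw field value and let REG_SET_FIELD() place it. */
    tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE, rptr_block_size);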
2022-05-06 | drm/amdgpu: nuke dynamic gfx scratch reg allocation | Christian König | 1 | -20/+4
It has been over a decade since this was actually used for more than ring and IB tests. Just use the static register directly where needed and nuke the now useless infrastructure. Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Lang Yu <Lang.Yu@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2022-05-04 | drm/amdgpu: use ring structure to access rptr/wptr v2 | Jack Xiao | 1 | -10/+10
Use ring structure to access the cpu/gpu address of rptr/wptr. v2: merge gfx10/sdma5/sdma5.2 patches Signed-off-by: Jack Xiao <Jack.Xiao@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
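As a sketch of the conversion (not the literal diff), MQD setup code that used to derive the report addresses from the per-ring writeback offsets can now take them straight from the ring structure, assuming the rptr_gpu_addr/wptr_gpu_addr fields this patch relies on:

    /* Before: rebuild the writeback address from adev->wb and the offset. */
    wb_gpu_addr = adev->wb.gpu_addr + (ring->rptr_offs * 4);
    mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
    mqd->cp_hqd_pq_rptr_report_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;

    /* After: the ring structure already caches the rptr address. */
    wb_gpu_addr = ring->rptr_gpu_addr;
    mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
    mqd->cp_hqd_pq_rptr_report_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;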
2022-04-11 | drm/amdgpu: Fix incorrect enum type | Grigory Vasilyev | 1 | -1/+1
The 'amdgpu_gfx_pipe_priority' type was used where the 'amdgpu_ring_priority_level' type was intended, which is an error when setting ring priority. This is a minor error, but it may cause problems in the future. AMDGPU_RING_PRIO_MAX = 3 could be used instead of AMDGPU_RING_PRIO_2 = 2, but AMDGPU_RING_PRIO_2 = 2 is kept for compatibility with AMDGPU_GFX_PIPE_PRIO_HIGH = 2, so the behavior of the code does not change. Signed-off-by: Grigory Vasilyev <h0tc0d3@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2022-04-08 | drm/amdgpu: expand cg_flags from u32 to u64 | Evan Quan | 1 | -1/+1
With this, we can support more CG flags. Signed-off-by: Evan Quan <evan.quan@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-11-05 | drm/amdgpu: correctly toggle gfx on/off around RLC_SPM_* register access | Evan Quan | 1 | -0/+4
As part of the IB padding process, accessing the RLC_SPM_* registers may trigger a gfx hang, since gfxoff may already be kicked in during that period. To address that, we manually toggle gfx on/off around the RLC_SPM_* register access. This resolves the gfx hang observed when running Talos with RDP launched in parallel. Signed-off-by: Evan Quan <evan.quan@amd.com> Acked-by: Guchun Chen <guchun.chen@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
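A minimal sketch of the wrap, assuming the existing amdgpu_gfx_off_ctrl() helper is what keeps gfx powered while the registers are touched:

    static void gfx_v8_0_update_spm_vmid(struct amdgpu_device *adev, unsigned vmid)
    {
            /* Keep gfx powered up while RLC_SPM_* registers are accessed;
             * if gfxoff has already kicked in, the access can hang the GPU. */
            amdgpu_gfx_off_ctrl(adev, false);

            /* ... existing RLC_SPM_MC_CNTL read-modify-write goes here ... */

            amdgpu_gfx_off_ctrl(adev, true);
    }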
2021-08-05 | drm/amdgpu: Put MODE register in wave debug info | Joseph Greathouse | 1 | -0/+1
Add the MODE register into the per-wave debug information. This register holds state such as FP rounding and denorm modes, which exceptions are enabled, and active clamping modes. Signed-off-by: Joseph Greathouse <Joseph.Greathouse@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-04-09 | drm/amdgpu: add the sched_score to amdgpu_ring_init | Christian König | 1 | -3/+3
Allow separate rings to share the same scheduler score. No functional change. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-and-Tested-by: Leo Liu <leo.liu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-03-23 | drm/amdgpu: Replace in_task() in gfx_v8_0_parse_sq_irq() | Sebastian Andrzej Siewior | 1 | -4/+5
gfx_v8_0_parse_sq_irq() is using in_task() to distinguish if it is invoked from a workqueue worker or directly from the interrupt handler. The usage of in_interrupt() in drivers is phased out and Linus clearly requested that code which changes behaviour depending on context should either be separated or the context be conveyed in an argument passed by the caller, which usually knows the context. gfx_v8_0_parse_sq_irq() is invoked directly either from a worker or from the interrupt service routine. The worker is only bypassed if the worker is already busy. Add an argument `from_wq' to gfx_v8_0_parse_sq_irq() which is true if invoked from the worker. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
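Sketch of the resulting call shape (declarations abbreviated; the sq_work bookkeeping shown here follows the existing fields in gfx_v8_0.c):

    /* Context is passed explicitly instead of being guessed via in_task(). */
    static void gfx_v8_0_parse_sq_irq(struct amdgpu_device *adev, unsigned ih_data,
                                      bool from_wq);

    static void gfx_v8_0_sq_irq_work_func(struct work_struct *work)
    {
            struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
                                                      gfx.sq_work.work);

            /* deferred path: we know we are on the workqueue */
            gfx_v8_0_parse_sq_irq(adev, adev->gfx.sq_work.ih_data, true);
    }

    /* The interrupt path calls gfx_v8_0_parse_sq_irq(adev, ih_data, false) directly. */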
2021-02-09 | drm/amdgpu: enable wave limit on non high prio cs pipes | Nirmoy Das | 1 | -1/+49
To achieve the best QoS for high priority compute jobs, it is required to limit waves on the other compute pipes as well. This patch sets the minimum value in the non-high-priority mmSPI_WCL_PIPE_PERCENT_CS[0-3] registers to minimize the impact of normal/low priority compute jobs on high priority compute jobs. Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
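A sketch of the idea; the register list comes from the gfx8 headers and the values are illustrative only (the real patch derives them from the register defaults):

    static const u32 wcl_cs_regs[] = {
            mmSPI_WCL_PIPE_PERCENT_CS0,
            mmSPI_WCL_PIPE_PERCENT_CS1,
            mmSPI_WCL_PIPE_PERCENT_CS2,
            mmSPI_WCL_PIPE_PERCENT_CS3,
    };

    static void gfx_v8_0_emit_wave_limit_cs(struct amdgpu_ring *ring, bool enable)
    {
            /* illustrative values: minimum wave percentage vs. "no limit" */
            u32 val = enable ? 0x1 : 0x7f;
            int i;

            /* Throttle waves on the non-high-priority compute pipes so the
             * high-priority queue sees less interference. */
            for (i = 0; i < ARRAY_SIZE(wcl_cs_regs); i++)
                    amdgpu_ring_emit_wreg(ring, wcl_cs_regs[i], val);
    }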
2021-02-09 | drm/amdgpu: add wave limit functionality for gfx8,9 | Nirmoy Das | 1 | -1/+17
Wave limiting can be used to load-balance high priority compute jobs along with gfx jobs. When enabled, this will reserve ~75% of waves for compute jobs. We do not need this from gfx10 onwards because >= gfx10 has asynchronous compute tunneling, which replaces the wave limit requirement. Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-02-09 | drm/amdgpu: enable only one high prio compute queue | Nirmoy Das | 1 | -4/+2
For high priority compute to work properly we need to enable wave limiting on the gfx pipe. Wave limiting is done by writing into the mmSPI_WCL_PIPE_PERCENT_GFX register. Enable only one high priority compute queue to avoid a race condition between multiple high priority compute queues writing to that register simultaneously. Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-12-08 | drm/amdgpu: use AMDGPU_NUM_VMID when possible | Nirmoy Das | 1 | -1/+1
Replace hardcoded vmid number with AMDGPU_NUM_VMID macro. Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-12-01 | drm/amd/amdgpu/gfx_v8_0: Functions must follow directly after their headers | Lee Jones | 1 | -1/+1
Fixes the following W=1 kernel build warning(s):

drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c:3698: warning: Excess function parameter 'adev' description in 'DEFAULT_SH_MEM_BASES'

Cc: Alex Deucher <alexander.deucher@amd.com> Cc: "Christian König" <christian.koenig@amd.com> Cc: David Airlie <airlied@linux.ie> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: amd-gfx@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-11-13 | drm/amdgpu: fix compute queue priority if num_kcq is less than 4 | Nirmoy Das | 1 | -2/+4
Compute queues are configurable with the module param num_kcq. amdgpu_gfx_is_high_priority_compute_queue was setting the first 4 queues to high priority, leaving a null drm scheduler in adev->gpu_sched[hw_ip]["normal_prio"].sched if num_kcq < 5. This patch fixes it by alternating compute queue priority between normal and high priority. Fixes: 33abcb1f5a1719b1c ("drm/amdgpu: set compute queue priority at mqd_init") Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-11-02 | drm/amdgpu/gfx: improve code indentation and alignment | Deepak R Varma | 1 | -1/+1
General code indentation and alignment changes such as replace spaces by tabs or align function arguments as per the coding style guidelines. Issue reported by checkpatch script. Signed-off-by: Deepak R Varma <mh12gx2825@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-10-27 | drm/amdgpu: correct CG_ACLK_CNTL setting | Evan Quan | 1 | -3/+11
Correct the Polaris CG_ACLK_CNTL setting. Signed-off-by: Evan Quan <evan.quan@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-10-16 | drm/amdgpu: move amdgpu_num_kcq handling to a helper | Alex Deucher | 1 | -1/+2
Add a helper so we can set per asic default values. Also, the module parameter is currently clamped to 8, but clamp it per asic just in case some asics have different limits in the future. Enable the option on gfx6,7 as well for consistency. Acked-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
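A sketch of what such a helper can look like; the function name and bounds are illustrative, with amdgpu_num_kcq being the module parameter:

    /* Pick a per-ASIC default when the module parameter is left at -1 and
     * clamp whatever the user passed to the ASIC's own limit. */
    static int example_gfx_get_num_kcq(struct amdgpu_device *adev, int asic_max_kcq)
    {
            int num_kcq = amdgpu_num_kcq;

            if (num_kcq < 0)
                    num_kcq = asic_max_kcq;

            return clamp(num_kcq, 0, asic_max_kcq);
    }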
2020-09-08 | Merge tag 'amd-drm-next-5.10-2020-09-03' of git://people.freedesktop.org/~agd5f/linux into drm-next | Dave Airlie | 1 | -34/+27
amd-drm-next-5.10-2020-09-03:

amdgpu:
- RAS fixes
- Sienna Cichlid updates
- Navy Flounder updates
- DCE6 (SI) support in DC
- Enable plane rotation
- Rework pre-OS vram reservation handling during driver init
- Add standard interface to dump GPU metrics table from SMU
- Rework tiling and tmz state handling in atomic commits
- Pstate fixes
- Add voltage and power hwmon interfaces for renoir
- SW CTF fixes
- S/G display fix for Raven
- Print client strings for vmfaults for vega and newer
- Manual fan control fixes
- Display updates
- Reorg power management directory structure
- Misc bug fixes
- Misc code cleanups

amdkfd:
- Topology fixes
- Add SMI events for thermal throttling and GPU resets

radeon:
- switch from pci_* to dma_* for dma allocations
- PLL fix

Scheduler:
- Clean up priority levels

UAPI:
- amdgpu INFO IOCTL query update for TMZ state
  https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6049
- amdkfd SMI event interface updates
  https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/therm_thrott

From: Alex Deucher <alexdeucher@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200903222921.4152-1-alexander.deucher@amd.com
2020-08-24 | drm/amdgpu: refine codes to avoid reentering GPU recovery | Dennis Li | 1 | -3/+3
If other threads hold the reset lock, recovery will fail in try_lock. Therefore we introduce the atomics hive->in_reset and adev->in_gpu_reset to avoid re-entering GPU recovery. v2: drop "? true : false" in the definition of amdgpu_in_reset Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Dennis Li <Dennis.Li@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
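The wrapper mentioned in the v2 note reduces to a one-line accessor over the atomic (sketch):

    /* True while GPU recovery owns the device; callers use this instead of
     * reading the atomic directly. */
    static inline bool amdgpu_in_reset(struct amdgpu_device *adev)
    {
            return atomic_read(&adev->in_gpu_reset);
    }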
2020-08-23 | treewide: Use fallthrough pseudo-keyword | Gustavo A. R. Silva | 1 | -1/+1
Replace the existing /* fall through */ comments and its variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings when it is the case. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
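The treewide change is mechanical; in this file it amounts to replacing a comment with the pseudo-keyword inside a switch. Illustrative example (case values and helpers are placeholders):

    static void example_set_state(struct amdgpu_device *adev, int state)
    {
            switch (state) {
            case 1:
                    enable_clockgating(adev);
                    fallthrough;    /* replaces the old "fall through" comment */
            case 2:
                    enable_powergating(adev);
                    break;
            default:
                    break;
            }
    }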
2020-08-14 | drm/amd/powerplay: drop unnecessary pp_funcs checker | Evan Quan | 1 | -3/+2
It's redundant. Also, the callers should not care about the implementation details. Signed-off-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Nirmoy Das <nirmoy.das@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-08-14 | drm/amd/powerplay: optimize amdgpu_dpm_set_clockgating_by_smu() implementation | Evan Quan | 1 | -14/+7
Hide the implementation details from outside (of the power part). Signed-off-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Nirmoy Das <nirmoy.das@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-08-14 | drm/amdgpu: revert "fix system hang issue during GPU reset" | Christian König | 1 | -3/+3
The whole approach wasn't thought through till the end. We already had a reset lock like this in the past and it caused the same problems like this one. Completely revert the patch for now and add individual trylock protection to the hardware access functions as necessary. This reverts commit df9c8d1aa278c435c30a69b8f2418b4a52fcb929. Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-08-04 | drm/amdgpu: introduce a new parameter to configure how many KCQ we want(v5) | Monk Liu | 1 | -14/+15
what: the MQD save and restore of KCQs (kernel compute queues) cost a lot of clocks during world switch, which hurts multi-VF performance considerably

how: introduce a parameter to control the number of KCQs, to avoid the performance drop if no kernel compute queue is needed

notes: this parameter only affects gfx 8/9/10

v2: refine namings
v3: choose queues for each ring so that they are spread across pipes as evenly as possible
v4: fix indentation, some cleanups in gfx_compute_queue_acquire()
v5: further fixes on indentation, more cleanups in gfx_compute_queue_acquire()

TODO: in the future the hypervisor driver will set this parameter automatically, so there will be no need for the user to configure it through modprobe in the virtual machine

Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-07-27 | drm/amdgpu: fix system hang issue during GPU reset | Dennis Li | 1 | -3/+3
When the GPU hangs, the driver has multiple paths to enter amdgpu_device_gpu_recover; the atomics adev->in_gpu_reset and hive->in_reset are used to avoid re-entering GPU recovery. During GPU reset and resume it is unsafe for other threads to access the GPU, which may cause the GPU reset to fail. Therefore the new rw_semaphore adev->reset_sem is introduced, which protects the GPU from being accessed by external threads during recovery.

v2:
1. add an rwlock for some ioctls, debugfs and the file-close function.
2. change to use dqm->is_resetting and dqm_lock for protection in the kfd driver.
3. remove try_lock and change adev->in_gpu_reset to an atomic, to avoid re-entering GPU recovery for the same GPU hang.

v3:
1. change back to using adev->reset_sem to protect the kfd callback functions, because dqm_lock couldn't protect all code paths; for example, free_mqd must be called outside of dqm_lock:

[ 1230.176199] Hardware name: Supermicro SYS-7049GP-TRT/X11DPG-QT, BIOS 3.1 05/23/2019
[ 1230.177221] Call Trace:
[ 1230.178249] dump_stack+0x98/0xd5
[ 1230.179443] amdgpu_virt_kiq_reg_write_reg_wait+0x181/0x190 [amdgpu]
[ 1230.180673] gmc_v9_0_flush_gpu_tlb+0xcc/0x310 [amdgpu]
[ 1230.181882] amdgpu_gart_unbind+0xa9/0xe0 [amdgpu]
[ 1230.183098] amdgpu_ttm_backend_unbind+0x46/0x180 [amdgpu]
[ 1230.184239] ? ttm_bo_put+0x171/0x5f0 [ttm]
[ 1230.185394] ttm_tt_unbind+0x21/0x40 [ttm]
[ 1230.186558] ttm_tt_destroy.part.12+0x12/0x60 [ttm]
[ 1230.187707] ttm_tt_destroy+0x13/0x20 [ttm]
[ 1230.188832] ttm_bo_cleanup_memtype_use+0x36/0x80 [ttm]
[ 1230.189979] ttm_bo_put+0x1be/0x5f0 [ttm]
[ 1230.191230] amdgpu_bo_unref+0x1e/0x30 [amdgpu]
[ 1230.192522] amdgpu_amdkfd_free_gtt_mem+0xaf/0x140 [amdgpu]
[ 1230.193833] free_mqd+0x25/0x40 [amdgpu]
[ 1230.195143] destroy_queue_cpsch+0x1a7/0x270 [amdgpu]
[ 1230.196475] pqm_destroy_queue+0x105/0x260 [amdgpu]
[ 1230.197819] kfd_ioctl_destroy_queue+0x37/0x70 [amdgpu]
[ 1230.199154] kfd_ioctl+0x277/0x500 [amdgpu]
[ 1230.200458] ? kfd_ioctl_get_clock_counters+0x60/0x60 [amdgpu]
[ 1230.201656] ? tomoyo_file_ioctl+0x19/0x20
[ 1230.202831] ksys_ioctl+0x98/0xb0
[ 1230.204004] __x64_sys_ioctl+0x1a/0x20
[ 1230.205174] do_syscall_64+0x5f/0x250
[ 1230.206339] entry_SYSCALL_64_after_hwframe+0x49/0xbe

2. remove try_lock and introduce the atomic hive->in_reset, to avoid re-entering GPU recovery.

v4:
1. remove an unnecessary whitespace change in kfd_chardev.c
2. remove commented-out code in amdgpu_device.c
3. add a more detailed comment in the commit message
4. define a wrapper function amdgpu_in_reset

v5:
1. Fix some style issues.

Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Suggested-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Suggested-by: Christian König <christian.koenig@amd.com> Suggested-by: Felix Kuehling <Felix.Kuehling@amd.com> Suggested-by: Lijo Lazar <Lijo.Lazar@amd.com> Suggested-by: Luben Tuikov <luben.tuikov@amd.com> Signed-off-by: Dennis Li <Dennis.Li@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-07-02 | drm/amdgpu: Clean up KFD VMID assignment | Felix Kuehling | 1 | -4/+2
The KFD VMID assignment was hard-coded in a few places. Consolidate that in a single variable adev->vm_manager.first_kfd_vmid. The value is still assigned in gmc-ip-version-specific code. Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-07-01 | drm/amdgpu: label internally used symbols as static | Nirmoy Das | 1 | -1/+1
Used sparse (make C=1) to find these loose ends. v2: removed unwanted extra line Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-05-18 | drm/amdgpu: apply AMDGPU_IB_FLAG_EMIT_MEM_SYNC to compute IBs too (v3) | Marek Olšák | 1 | -1/+18
Compute IBs need this too. v2: split out version bump v3: squash in emit frame count fixes Signed-off-by: Marek Olšák <marek.olsak@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-05-18 | drm/amdgpu: Add mem_sync implementation for all the ASICs. | Andrey Grodzovsky | 1 | -1/+16
Implement the .mem_sync hook defined earlier. v2: Rename functions Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-04-23 | drm/amdgpu: request reg_val_offs each kiq read reg | Yintian Tao | 1 | -4/+4
According to the current KIQ read-register method, there is a race condition when multiple clients use KIQ to read registers at the same time, as in the example below:

1. client-A starts to read REG-0 through KIQ
2. client-A polls seqno-0
3. client-B starts to read REG-1 through KIQ
4. client-B polls seqno-1
5. the KIQ completes these two read operations
6. client-A reads the register from the wb buffer and gets the REG-1 value

Therefore, use amdgpu_device_wb_get() to request reg_val_offs for each KIQ register read.

v2: fix the error remove
v3: fix the print typo
v4: remove unused variables

Signed-off-by: Yintian Tao <yttao@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
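Sketch of the per-call allocation using the existing amdgpu_device_wb_get()/amdgpu_device_wb_free() writeback helpers; the KIQ packet emission and fence wait are elided:

    uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg)
    {
            uint32_t reg_val_offs = 0, value;

            /* Each read gets its own writeback slot, so concurrent readers
             * can no longer race on a single shared slot. */
            if (amdgpu_device_wb_get(adev, &reg_val_offs))
                    return ~0;

            /* ... emit the KIQ read packet targeting reg_val_offs and wait
             * for its seqno here ... */

            value = adev->wb.wb[reg_val_offs];
            amdgpu_device_wb_free(adev, reg_val_offs);
            return value;
    }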
2020-04-22 | drm/amdgpu: change how we update mmRLC_SPM_MC_CNTL | Christian König | 1 | -2/+8
In pp_one_vf mode avoid the extra overhead and read/write the registers without the KIQ. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Monk Liu <monk.liu@amd.com> Acked-by: Yintian Tao <yintian.tao@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-04-09 | drm/amdgpu: rework sched_list generation | Nirmoy Das | 1 | -5/+6
Generate each HW IP's sched_list in amdgpu_ring_init() instead of amdgpu_ctx.c. This makes amdgpu_ctx_init_compute_sched(), ring.has_high_prio and amdgpu_ctx_init_sched() unnecessary. This patch also stores the sched_list for all HW IPs in one big array in struct amdgpu_device, which makes amdgpu_ctx_init_entity() much leaner.

v2: fix a coding style issue; do not use drm hw_ip const to populate the amdgpu_ring_type enum
v3: remove the ctx reference and move the sched array and num_sched to a struct; use num_scheds to detect an uninitialized scheduler list
v4: use array_index_nospec for user space controlled variables; fix possible checkpatch.pl warnings

Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-04-01 | drm/amdgpu: stop disable the scheduler during HW fini | Christian König | 1 | -7/+0
When we stop the HW, for example for GPU reset, we should not stop the front-end scheduler. Otherwise we run into intermediate failures during command submission. The scheduler should only be stopped in very few cases:

1. We can't get the hardware working in the ring or IB test after a GPU reset.
2. The KIQ scheduler is not used in the front-end and should be disabled during GPU reset.
3. In amdgpu_ring_fini() when the driver unloads.

Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Nirmoy Das <nirmoy.das@amd.com> Tested-by: Dennis Li <dennis.li@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-04-01 | drm/amdgpu: implement more ib pools (v2) | xinhui pan | 1 | -2/+4
We have three IB pools: the normal, VM, and direct pools. Jobs which schedule IBs without depending on the gpu scheduler should use the DIRECT pool. Jobs that schedule direct VM update IBs should use the VM pool. All other jobs use the NORMAL pool. v2: squash in coding style fix Signed-off-by: xinhui pan <xinhui.pan@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-03-09 | drm/amdgpu: remove unused functions | Nirmoy Das | 1 | -99/+0
AMDGPU statically sets priority for compute queues at initialization so remove all the functions responsible for changing compute queue priority dynamically. Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-03-09 | drm/amdgpu: set compute queue priority at mqd_init | Nirmoy Das | 1 | -3/+20
We were changing compute ring priority while the rings were in use, before every job submission, which is not recommended. This patch sets the compute queue priority at mqd initialization for gfx8, gfx9 and gfx10.

Policy: make queue 0 of each pipe the high priority compute queue. High/normal priority compute sched lists are generated from the set of high/normal priority compute queues. At context creation, the entity of a compute queue gets a sched list from the high or normal priority list depending on ctx->priority.

Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
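Illustrative sketch of the policy (names simplified; the concrete helper lives in amdgpu_gfx.c and the MQD fields come from the vi_mqd struct):

    /* Queue 0 of each compute pipe becomes that pipe's high-priority queue,
     * decided once at MQD init instead of before every submission. */
    static void example_mqd_set_priority(struct amdgpu_ring *ring, struct vi_mqd *mqd)
    {
            bool high_prio = ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE &&
                             ring->queue == 0;

            if (high_prio)
                    mqd->cp_hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_HIGH;
    }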
2020-03-05 | drm/amdgpu: fix IB test MCBP bug | Monk Liu | 1 | -1/+1
1) For the gfx IB test we shouldn't insert DE meta data.
2) We should make sure the IB test has finished before we send event 3 to the hypervisor; otherwise the IDLE from event 3 will preempt the IB test, which is not designed as a structure compatible with MCBP.

Signed-off-by: Monk Liu <Monk.Liu@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-02-28 | drm/amdgpu: Initialize SPM_VMID with 0xf (v2) | Jacob He | 1 | -1/+18
SPM_VMID is a global resource; SPM accesses video memory according to SPM_VMID. The initial value of SPM_VMID is 0, which is used by the kernel. That means UMD could overwrite the memory of VMID 0 by enabling SPM, which is really dangerous. Initialize SPM_VMID with 0xf; at worst it messes up another user mode process. v2: squash in indentation fix Signed-off-by: Jacob He <jacob.he@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
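A sketch of the initialization, assuming the usual RLC_SPM_MC_CNTL field macros from the gfx8 headers:

    u32 data;

    /* Point SPM at VMID 0xf so that, at worst, a user mode process is
     * affected rather than kernel memory sitting under VMID 0. */
    data = RREG32(mmRLC_SPM_MC_CNTL);
    data &= ~RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK;
    data |= 0xf << RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT;
    WREG32(mmRLC_SPM_MC_CNTL, data);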
2020-02-28 | drm/amdgpu: Add num_banks and num_ranks to gfx config structure | Yong Zhao | 1 | -0/+5
The two members will be used by KFD later. Signed-off-by: Yong Zhao <Yong.Zhao@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-01-22 | drm/amdgpu: provide a generic function interface for reading/writing register by KIQ | chen gong | 1 | -2/+3
Move the amdgpu_virt_kiq_rreg/amdgpu_virt_kiq_wreg functions to amdgpu_gfx.c and rename them to amdgpu_kiq_rreg/amdgpu_kiq_wreg, making them generic and flexible. Signed-off-by: chen gong <curry.gong@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Huang Rui <ray.huang@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-01-16 | drm/amdgpu: only set cp active field for kiq queue | Huang Rui | 1 | -2/+5
The mec ucode sets the CP_HQD_ACTIVE bit while the queue is being mapped by the MAP_QUEUES packet, so we only need to set the cp active field for the kiq queue. Signed-off-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
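Sketch of the distinction in the MQD setup (a fragment; ring type check as in the rest of the file):

    /* The MEC firmware sets CP_HQD_ACTIVE itself when a queue is mapped
     * through the MAP_QUEUES packet, so only the KIQ - which the driver
     * activates directly - needs the bit preset in its MQD. */
    if (ring->funcs->type == AMDGPU_RING_TYPE_KIQ)
            mqd->cp_hqd_active = 1;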
2019-12-05 | drm/amdgpu: add cache flush workaround to gfx8 emit_fence | Pierre-Eric Pelloux-Prayer | 1 | -3/+19
The same workaround is used for gfx7. Both PAL and Mesa use it for gfx8 too, so port this commit to gfx_v8_0_ring_emit_fence_gfx. Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-12-02 | drm/amdgpu: fix calltrace during kmd unload(v3) | Monk Liu | 1 | -39/+1
issue: the kernel reports a warning about a double unpin of the CSB bo during driver unload

why: we unpin it during hw_fini, and there is another unpin of the CSB bo in sw_fini

fix: we actually don't need to pin/unpin it during hw_init/fini since it is created kernel pinned; we only need to fill the CSB again during hw_init to prevent CSB/VRAM loss after S3

v2: get_csb in init_rlc so hw_init() brings the CSIB content back even after reset or S3
v3: use bo_create_kernel instead of bo_create_reserved for the CSB, otherwise the bo_free_kernel() on the CSB is not aligned and would leave its internal reservation pending forever; take care of gfx7/8 as well

Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Reviewed-by: Xiaojie Yuan <xiaojie.yuan@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-11-13 | drm/amdgpu: remove set but not used variable 'mc_shared_chmap' | yu kuai | 1 | -2/+1
Fixes gcc '-Wunused-but-set-variable' warning:

drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c: In function ‘gfx_v8_0_gpu_early_init’:
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c:1713:6: warning: variable ‘mc_shared_chmap’ set but not used [-Wunused-but-set-variable]

Fixes: 0bde3a95eaa9 ("drm/amdgpu: split gfx8 gpu init into sw and hw parts")
Signed-off-by: yu kuai <yukuai3@huawei.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-10-25 | drm/amdgpu: remove unused parameter in amdgpu_gfx_kiq_free_ring | Nirmoy Das | 1 | -1/+1
Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-08-09 | Merge tag 'v5.3-rc3' into drm-next-5.4 | Alex Deucher | 1 | -0/+9
Linux 5.3-rc3 Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-08-06 | drm/amdgpu: pin the csb buffer on hw init for gfx v8 | Likun Gao | 1 | -0/+40
Without this pin, the csb buffer will be filled with inconsistent data after S3 resume, and that will cause a gfx hang on gfxoff exit since this csb will be executed then. Signed-off-by: Likun Gao <Likun.Gao@amd.com> Tested-by: Paul Gover <pmw.gover@yahoo.co.uk> Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Reviewed-by: Xiaojie Yuan <xiaojie.yuan@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-07-30 | drm/amdgpu: Default disable GDS for compute+gfx | Joseph Greathouse | 1 | -7/+17
Units in the GDS block default to allowing all VMIDs access to all entries. Disable shader access to the GDS, GWS, and OA blocks from all compute and gfx VMIDs by default. For compute, the HWS firmware will set up the access bits for the appropriate VMID when a compute queue requires access to these blocks. The driver will handle enabling access on demand for graphics VMIDs. VMID0 is left with full access because otherwise HWS cannot save or restore values during task switch. v2: Fixed code and comment styling. Signed-off-by: Joseph Greathouse <Joseph.Greathouse@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>