path: root/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
Commit history (newest first):
2019-09-16  drm/amdgpu: add graceful VM fault handling v3  (Christian König, 1 file, +2/-0)
Next step towards HMM support. For now just silence the retry fault and optionally redirect the request to the dummy page. v2: make sure the VM is not destroyed while we handle the fault. v3: fix VM destroy check, cleanup comments Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-09-16  drm/amdgpu: allow direct submission of PDE updates v2  (Christian König, 1 file, +2/-2)
For handling PDE updates directly in the fault handler. v2: fix typo in comment Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-09-16  drm/amdgpu: allow direct submission in the VM backends v2  (Christian König, 1 file, +5/-0)
This allows us to update page tables directly while in a page fault. v2: use direct/delayed entities and still wait for moves Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-09-16  drm/amdgpu: split the VM entity into direct and delayed  (Christian König, 1 file, +3/-2)
For page fault handling we need to use a direct update which can't be blocked by ongoing user CS. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
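As a rough illustration of the split described in this entry, the VM can be pictured as holding two scheduler entities; the sketch below is an assumption based on the commit text, not a quote of the header:

    #include <drm/gpu_scheduler.h>

    /* Sketch only: two entities so page-table updates issued from the
     * fault path ("direct") cannot be starved by updates queued behind
     * ongoing user command submissions ("delayed"). */
    struct amdgpu_vm_entities_sketch {
            struct drm_sched_entity direct;   /* fault-path updates */
            struct drm_sched_entity delayed;  /* normal, CS-ordered updates */
    };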
2019-09-13  drm/amdgpu: reserve at least 4MB of VRAM for page tables v2  (Christian König, 1 file, +3/-0)
This hopefully helps reduce the contention for page tables. v2: adjust maximum reported VRAM size as well Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
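In code, such a reservation is essentially a constant subtracted from the VRAM size reported as usable; the constant name and helper below are illustrative assumptions, not the driver's implementation:

    #include <stdint.h>

    /* Sketch: hold 4MB of VRAM back so page-directory/page-table
     * allocations always have room, even under memory pressure. */
    #define VM_RESERVED_VRAM_SKETCH (4ULL << 20)

    static inline uint64_t usable_vram_sketch(uint64_t real_vram_size)
    {
            return real_vram_size - VM_RESERVED_VRAM_SKETCH;
    }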
2019-07-30  drm/amdgpu/gmc10: fix pte mtype field error for navi14  (tiancyin, 1 file, +1/-1)
navi14 shares the same PTE format as navi10. Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: tiancyin <tianci.yin@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-07-18  drm/amdgpu: add one more mmhub instance for Arcturus (v2)  (Le Ma, 1 file, +2/-1)
v2: set mmhub num under CHIP_ARCTURUS switch case and add one more mmhub id_mgr Signed-off-by: Le Ma <le.ma@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-07-18  drm/amdgpu: rename AMDGPU_GFXHUB/MMHUB macro with hub number  (Le Ma, 1 file, +2/-2)
The number of GFXHUB/MMHUB may be expanded in later ASICs. Signed-off-by: Le Ma <le.ma@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-06-20  drm/amdgpu: refine the PTE encoding of PRT for navi10  (Jack Xiao, 1 file, +2/-0)
Due to the GCR change on navi10, the PTE encoding of PRT needs to change to VSCTL = 01111 (was 0XX1X). Signed-off-by: Jack Xiao <Jack.Xiao@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-06-20  drm/amd/gmc9: rename AMDGPU_PTE_MTYPE to AMDGPU_PTE_MTYPE_VG10  (Hawking Zhang, 1 file, +3/-3)
To differentiate the mtypes across asics. Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-06-20  drm/amdgpu: correct pte mtype field for navi  (Hawking Zhang, 1 file, +5/-1)
The MTYPE field moves from bits 58:57 to bits 50:48 for NV10, and the field is now 3 bits wide. Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
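Expressed as bit-field helpers, the layout change described above amounts to moving the shift and widening the mask; the macro names and values below are a sketch inferred from this and the previous entry, not a quote of the header:

    #include <stdint.h>

    /* Sketch: Vega10 keeps a 2-bit MTYPE at PTE bits 58:57, while Navi10
     * uses a 3-bit MTYPE at bits 50:48. */
    #define AMDGPU_PTE_MTYPE_VG10(a)    ((uint64_t)(a) << 57)
    #define AMDGPU_PTE_MTYPE_VG10_MASK  AMDGPU_PTE_MTYPE_VG10(3ULL)

    #define AMDGPU_PTE_MTYPE_NV10(a)    ((uint64_t)(a) << 48)
    #define AMDGPU_PTE_MTYPE_NV10_MASK  AMDGPU_PTE_MTYPE_NV10(7ULL)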
2019-04-03  drm/amdgpu: provide the page fault queue to the VM code  (Christian König, 1 file, +1/-0)
We are going to need that for recoverable page faults. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-27  drm/amdgpu: drop the ib from the VM update parameters  (Christian König, 1 file, +0/-5)
It is redundant with the job pointer. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-27  drm/amdgpu: move VM table mapping into the backend as well  (Christian König, 1 file, +1/-1)
Clean that up further and also fix another case where the BO wasn't kmapped for CPU based updates. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-27  drm/amdgpu: XGMI pstate switch initial support  (shaoyunl, 1 file, +4/-0)
The driver votes for a low-to-high pstate switch whenever there is an outstanding XGMI mapping request, and votes high-to-low once all outstanding XGMI mappings are terminated. Signed-off-by: shaoyunl <shaoyun.liu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-21  drm/amdgpu: use the new VM backend for PTEs  (Christian König, 1 file, +0/-13)
And remove the existing code when it is unused. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-21  drm/amdgpu: new VM update backends  (Christian König, 1 file, +29/-1)
Separate out all functions for SDMA and CPU based page table updates into separate backends. This way we can keep most of the complexity of those from the core VM code. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
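The backend split described here can be sketched as a small table of callbacks that both the SDMA and CPU paths implement; the member names and signatures below are a simplified assumption, not the exact header contents:

    #include <stdint.h>

    struct amdgpu_bo;
    struct dma_fence;
    struct amdgpu_vm_update_params;

    /* Sketch of a VM update backend interface. */
    struct amdgpu_vm_update_funcs_sketch {
            /* make the page-table BO accessible (e.g. kmap for CPU updates) */
            int (*map_table)(struct amdgpu_bo *bo);
            /* sync/reserve resources before touching the tables */
            int (*prepare)(struct amdgpu_vm_update_params *p, void *owner,
                           struct dma_fence *exclusive);
            /* write 'count' entries starting at GPU address 'pe',
             * mapping 'addr' and stepping by 'incr' bytes per entry */
            int (*update)(struct amdgpu_vm_update_params *p,
                          struct amdgpu_bo *bo, uint64_t pe, uint64_t addr,
                          unsigned int count, uint32_t incr, uint64_t flags);
            /* submit the accumulated updates and return a fence */
            int (*commit)(struct amdgpu_vm_update_params *p,
                          struct dma_fence **fence);
    };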
2019-03-21  drm/amdgpu: move and rename amdgpu_pte_update_params  (Christian König, 1 file, +45/-0)
Move the update parameters into the VM header and rename them. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-21  drm/amdgpu: remove some unused VM defines  (Christian König, 1 file, +0/-5)
Not needed any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-19  drm/amdgpu: wait for VM to become idle during flush  (Christian König, 1 file, +2/-0)
Make sure that not only the entities are flushed, but that we also wait for the HW to finish all processing. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-19  drm/amdgpu: remove chash  (Christian König, 1 file, +0/-14)
Remove the chash implementation for now since it isn't used any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-19  drm/amdgpu: drop the huge page flag  (Christian König, 1 file, +0/-1)
Not needed any more since we now free PDs/PTs on demand. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-03-19  drm/amdgpu: allocate VM PDs/PTs on demand  (Christian König, 1 file, +0/-3)
Let's start to allocate VM PDs/PTs on demand instead of pre-allocating them during mapping. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-01-25  drm/amdgpu: set bulk_moveable to false when lru changed v2  (Chunming Zhou, 1 file, +2/-0)
If the LRU is changed, we cannot do bulk moving. v2: the root BO isn't in bulk moving, skip its change. Signed-off-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-12-07  drm/amdgpu: remove VM fault_credit handling  (Christian König, 1 file, +0/-5)
printk_ratelimit() is much better suited to limit the number of reported VM faults. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-09-13  drm/amdgpu: use a single linked list for amdgpu_vm_bo_base  (Christian König, 1 file, +1/-1)
Instead of the double linked list. Gets the size of amdgpu_vm_pt down to 64 bytes again. We could even reduce it down to 32 bytes, but that would require some rather extreme hacks. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-09-12  drm/amdgpu: Move fault hash table to amdgpu vm  (Oak Zeng, 1 file, +13/-0)
Instead of sharing one fault hash table per device, make it per VM. This can avoid an inter-process lock issue when the fault hash table is full. Change-Id: I5d1281b7c41eddc8e26113e010516557588d3708 Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Suggested-by: Christian Konig <Christian.Koenig@amd.com> Suggested-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian Konig <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-09-10  drm/amdgpu: correctly sign extend 48bit addresses v3  (Christian König, 1 file, +0/-13)
Correctly sign extend the GMC addresses to 48 bits. v2: sign extending turned out easier than thought. v3: clean up the defines and move them into amdgpu_gmc.h as well. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
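For reference, sign extending a 48-bit address to 64 bits can be done with a shift pair; this is a standalone sketch of the operation, not the driver's actual helper:

    #include <stdint.h>

    /* Shift the value up so bit 47 becomes bit 63, then shift back
     * arithmetically so the sign bit is replicated into bits 63:48. */
    static inline int64_t sign_extend48(uint64_t addr)
    {
            return (int64_t)(addr << 16) >> 16;
    }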
2018-09-10  drm/amdgpu: separate per VM BOs from normal in the moved state  (Christian König, 1 file, +5/-2)
Allows us to avoid taking the spinlock in more places. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-08-29  drm/amdkfd: Release an acquired process vm  (Oak Zeng, 1 file, +1/-0)
For a compute VM acquired from amdgpu, vm.pasid is managed by kfd. Decouple the pasid from such a VM on process destroy to avoid a duplicate pasid release. Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-08-29  drm/amdgpu: Set pasid for compute vm (v2)  (Oak Zeng, 1 file, +1/-1)
To turn an amdgpu VM into a compute VM, the old pasid is freed and replaced with a pasid managed by kfd. Kfd can't reuse the original pasid allocated by amdgpu because kfd uses a different pasid policy than amdgpu; for example, all graphics devices share one and the same pasid in a process. v2: rebase (Alex) Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-08-27  drm/amdgpu: remove extra root PD alignment  (Christian König, 1 file, +0/-3)
Just another leftover from radeon. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-08-27  drm/amdgpu: Adjust the VM size based on system memory size v2  (Felix Kuehling, 1 file, +1/-1)
Set the VM size based on system memory size between the ASIC-specific limits given by min_vm_size and max_bits. GFXv9 GPUs will keep their default VM size of 256TB (48 bit). Only older GPUs will adjust VM size depending on system memory size. This makes more VM space available for ROCm applications on GFXv8 GPUs that want to map all available VRAM and system memory in their SVM address space. v2: * Clarify comment * Round up memory size before >> 30 * Round up automatic vm_size to power of two Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
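A sketch of the sizing rule described above; the helper name, parameters and exact rounding are assumptions for illustration, not the driver's implementation:

    #include <stdint.h>

    /* Pick a VM size in GB from the system memory size, clamped to the
     * ASIC limits and rounded up to a power of two. */
    static uint64_t pick_vm_size_gb(uint64_t sysmem_bytes,
                                    uint64_t min_vm_size_gb, uint32_t max_bits)
    {
            uint64_t ram_gb = (sysmem_bytes + (1ULL << 30) - 1) >> 30;
            uint64_t max_gb = 1ULL << (max_bits - 30);
            uint64_t want = ram_gb > min_vm_size_gb ? ram_gb : min_vm_size_gb;
            uint64_t vm_size = 1;

            while (vm_size < want)  /* round up to a power of two */
                    vm_size <<= 1;

            return vm_size < max_gb ? vm_size : max_gb;
    }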
2018-08-27  drm/amdgpu: use bulk moves for efficient VM LRU handling (v6)  (Huang Rui, 1 file, +10/-1)
This continues the bulk-move work based on the proposal by Christian.

Background: the amdgpu driver moves all PD/PT and per-VM BOs onto the idle list and then moves each of them to the end of the LRU list one by one. Moving that many BOs to the end of the LRU hurts performance seriously. Christian provided a workaround that avoids moving PD/PT BOs on the LRU: commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid validating VM PTs"). The proper solution, however, is to bulk move all PD/PT and per-VM BOs on the LRU instead of one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be validated, we move all BOs together to the end of the LRU without dropping the LRU lock, and note the beginning and end of this block in the LRU list. When amdgpu_vm_validate_pt_bos() is called and there is nothing to do, we no longer move every BO one by one; instead we cut the LRU list into pieces so that everything is bulk moved to the end in just one operation.

Test data (the three benchmarks cover Vulkan and OpenCL):
- The Talos Principle (Vulkan): Original 147.7 FPS; Original + WA (don't move PT BOs on LRU) 162.1 FPS; Bulk move 163.1 FPS
- Clpeak (OCL): Original 76.86 us; Original + WA 42.15 us; Bulk move 40.52 us
- BusSpeedReadback (OCL):
  Original: 0.319 ms (1K), 0.314 ms (2K), 0.308 ms (4K), 0.307 ms (8K), 0.310 ms (16K)
  Original + WA: 0.254 ms (1K), 0.241 ms (2K), 0.230 ms (4K), 0.223 ms (8K), 0.204 ms (16K)
  Bulk move: 0.244 ms (1K), 0.252 ms (2K), 0.213 ms (4K), 0.214 ms (8K), 0.225 ms (16K)

The results show a visible improvement over the original, and even slightly better numbers than the original with the workaround.

v2: move all BOs, including those on the idle, relocated and moved lists, to the end of the LRU and put them together.
v3: remove the unused parameter and use list_for_each_entry instead of the safe variant.
v4: call amdgpu_vm_move_to_lru_tail() after command submission, at which point all BOs are back on the idle list.
v5: remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instead of validated, and also move ttm_bo_bulk_move_lru_tail() into amdgpu_vm_move_to_lru_tail().
v6: clean up and fix the return value.

Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Huang Rui <ray.huang@amd.com> Tested-by: Mike Lothian <mike@fireburn.co.uk> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de> Acked-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
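A condensed sketch of the fast path described above; the field names and lock handling are simplified assumptions built around the amdgpu_vm_move_to_lru_tail()/ttm_bo_bulk_move_lru_tail() calls named in the message (presumes the amdgpu and TTM headers):

    #include <linux/spinlock.h>

    /* Sketch: if the recorded LRU positions are still valid, splice the
     * whole block of VM BOs to the tail in one bulk operation instead of
     * touching every BO individually. */
    static void vm_move_to_lru_tail_sketch(struct amdgpu_vm *vm,
                                           spinlock_t *lru_lock)
    {
            if (!vm->bulk_moveable)
                    return; /* LRU was disturbed; fall back to per-BO moves */

            spin_lock(lru_lock);
            ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
            spin_unlock(lru_lock);
    }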
2018-08-27  drm/amdgpu: use new scheduler load balancing for VMs  (Christian König, 1 file, +3/-4)
Instead of the fixed round-robin assignment, let the scheduler balance the load of page table updates. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-08-27  drm/amdgpu: move vm definitions into amdgpu_vm header  (Huang Rui, 1 file, +25/-0)
Demangle amdgpu.h. Signed-off-by: Huang Rui <ray.huang@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-07-31  drm/amdgpu: add new amdgpu_vm_bo_trace_cs() function v2  (Christian König, 1 file, +1/-0)
This allows us to trace all VM ranges which should be valid inside a CS. v2: dump mappings without BO as well Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> (v1) Reviewed-by: Huang Rui <ray.huang@amd.com> (v1) Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-07-10  drm/amdgpu: Add support for logging process info in amdgpu_vm.  (Andrey Grodzovsky, 1 file, +16/-0)
Add process and thread names and pids, and a function to extract this info from the relevant amdgpu_vm. v2: Add documentation and fix indentation. v3: Add getter and setter functions for amdgpu_task_info. Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Acked-by: Jim Qu <Jim.Qu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
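A minimal sketch of the kind of per-VM task info this adds; the member names are assumptions based on the description, not a quote of the header:

    #include <linux/sched.h>  /* TASK_COMM_LEN, pid_t */

    /* Identify the process/thread owning a VM so faults can be
     * attributed in the kernel log. */
    struct amdgpu_task_info_sketch {
            char  process_name[TASK_COMM_LEN];
            char  task_name[TASK_COMM_LEN];
            pid_t pid;   /* thread id */
            pid_t tgid;  /* process id */
    };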
2018-05-24  drm/amdgpu: move VM BOs on LRU again  (Christian König, 1 file, +3/-0)
Move all BOs belonging to a VM on the LRU with every submission. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-05-24  drm/amdgpu: rework VM state machine lock handling v2  (Christian König, 1 file, +1/-3)
Only the moved state needs a separate spin lock protection. All other states are protected by reserving the VM anyway. v2: fix some more incorrect cases Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
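Sketched as data, the locking rule in this entry looks roughly like the following; the list names are assumptions for illustration, not the exact header contents:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    /* Sketch: only the "moved" state needs its own spinlock; the other
     * per-VM state lists are protected by holding the VM reservation. */
    struct amdgpu_vm_states_sketch {
            struct list_head evicted;    /* protected by VM reservation */
            struct list_head relocated;  /* protected by VM reservation */
            struct list_head moved;      /* protected by moved_lock */
            spinlock_t       moved_lock;
            struct list_head idle;       /* protected by VM reservation */
    };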
2018-05-18  drm/amdgpu: remove unused member  (Christian König, 1 file, +0/-3)
This lock isn't used any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-05-15  drm/amdgpu: Add support to change mtype for 2nd part of gart BOs on GFX9  (Yong Zhao, 1 file, +3/-2)
This change prepares for a workaround in amdkfd for a GFX9 HW bug. It requires the control stack memory of compute queues, which is allocated from the second page of MQD gart BOs, to have mtype NC, rather than the default UC. Signed-off-by: Yong Zhao <yong.zhao@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-03-15  drm/amdgpu: Add helper to turn an existing VM into a compute VM  (Felix Kuehling, 1 file, +1/-0)
v2: Removed updating and checking of vm->vm_context v3: Enable amdgpu_vm_clear_bo in amdgpu_vm_make_compute Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
2018-03-15  drm/amdgpu: Move KFD-specific fields into struct amdgpu_vm  (Felix Kuehling, 1 file, +9/-0)
Remove struct amdkfd_vm and move the fields into struct amdgpu_vm. This will allow turning a VM created by a DRM render node into a KFD VM. v2: Removed vm_context field Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
2018-02-06  drm/amdgpu: Fix header file dependencies  (Felix Kuehling, 1 file, +1/-0)
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
2018-02-19  drm/amdgpu: reduce reserved VA size  (Christian König, 1 file, +1/-1)
1MB should be more than enough; currently we use about 8K. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Monk Liu <monk.liu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-27  drm/amdgpu: drop client_id from VM  (Christian König, 1 file, +0/-4)
Use the fence context from the scheduler entity. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-27  drm/amdgpu: separate VMID and PASID handling  (Christian König, 1 file, +3/-41)
Move both into the new files amdgpu_ids.[ch]. No functional change. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-18  drm/amdgpu: implement 2+1 PD support for Raven v3  (Christian König, 1 file, +6/-0)
Instead of falling back to 2 level and very limited address space use 2+1 PD support and 128TB + 512GB of virtual address space. v2: cleanup defines, rebase on top of level enum v3: fix inverted check in hardware setup Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-and-Tested-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-14  drm/amdgpu: add enumerate for PDB/PTB v3  (Chunming Zhou, 1 file, +11/-0)
v2: remove SUBPTB member v3: remove last_level, use AMDGPU_VM_PTB directly instead. Signed-off-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
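Sketched as an enum, the page-directory/page-table levels referred to here look roughly like this; only AMDGPU_VM_PTB is named in the message, the other enumerators are assumptions:

    /* Sketch of a 4-level hierarchy: three page directory block levels
     * above the leaf page table block. */
    enum amdgpu_vm_level_sketch {
            AMDGPU_VM_PDB2, /* top-level page directory */
            AMDGPU_VM_PDB1,
            AMDGPU_VM_PDB0,
            AMDGPU_VM_PTB   /* leaf page table block */
    };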