path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2024-02-29 | iommu/arm-smmu-v3: Pass smmu_domain to arm_enable/disable_ats() | Jason Gunthorpe | 1 | -7/+6
The caller already has the domain, just pass it in. A following patch will remove master->domain. Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Moritz Fischer <moritzf@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/9-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-29 | iommu/arm-smmu-v3: Put writing the context descriptor in the right order | Jason Gunthorpe | 1 | -9/+20
Get closer to the IOMMU API ideal that changes between domains can be hitless. The ordering for the CD table entry is not entirely clean from this perspective. When switching away from a STE with a CD table programmed in it we should write the new STE first, then clear any old data in the CD entry. If we are programming a CD table for the first time to a STE then the CD entry should be programmed before the STE is loaded. If we are replacing a CD table entry when the STE already points at the CD entry then we just need to do the make/break sequence. Lift this code out of arm_smmu_detach_dev() so it can all be sequenced properly. The only other caller is arm_smmu_release_device() and it is going to free the cdtable anyhow, so it doesn't matter what is in it. Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Moritz Fischer <moritzf@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/8-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-29 | iommu/arm-smmu-v3: Do not change the STE twice during arm_smmu_attach_dev() | Jason Gunthorpe | 1 | -6/+9
This was needed because the STE code required the STE to be in ABORT/BYPASS in order to program a cdtable or S2 STE. Now that the STE code can automatically handle all transitions we can remove this step from the attach_dev flow.

A few small bugs exist because of this:

 1) If the core code does BLOCKED -> UNMANAGED with disable_bypass=false then there will be a moment where the STE points at BYPASS. Since this can be done by VFIO/IOMMUFD it is a small security race.

 2) If the core code does IDENTITY -> DMA then any IOMMU_RESV_DIRECT regions will temporarily become BLOCKED. We'd like drivers to work in a way that allows IOMMU_RESV_DIRECT to be continuously functional during these transitions.

Make arm_smmu_release_device() put the STE back to the correct ABORT/BYPASS setting. Fix a bug where an IOMMU_RESV_DIRECT was ignored on this path. As noted before, the reordering of the linked list/STE/CD changes is OK against concurrent arm_smmu_share_asid() because of the arm_smmu_asid_lock. Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Moritz Fischer <moritzf@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/7-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-29 | iommu/arm-smmu-v3: Compute the STE only once for each master | Jason Gunthorpe | 1 | -35/+22
Currently arm_smmu_install_ste_for_dev() iterates over every SID and computes from scratch an identical STE. Every SID should have the same STE contents. Turn this inside out so that the STE is supplied by the caller and arm_smmu_install_ste_for_dev() simply installs it to every SID. This is possible now that the STE generation does not inform what sequence should be used to program it. This allows splitting the STE calculation up according to the call site, which following patches will make use of, and removes the confusing NULL domain special case that only supported arm_smmu_detach_dev(). Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Moritz Fischer <moritzf@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/6-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-29 | iommu/arm-smmu-v3: Hold arm_smmu_asid_lock during all of attach_dev | Jason Gunthorpe | 1 | -9/+13
The BTM support wants to be able to change the ASID of any smmu_domain. When it goes to do this it holds the arm_smmu_asid_lock and iterates over the target domain's devices list. During attach of a S1 domain we must ensure that the devices list and CD are in sync, otherwise we could miss CD updates or a parallel CD update could push an out of date CD.

This is pretty complicated, and almost works today because arm_smmu_detach_dev() removes the master from the linked list before working on the CD entries, preventing parallel update of the CD. However, it does have an issue where the CD can remain programmed while the domain appears to be unattached. arm_smmu_share_asid() will then not clear any CD entries and install its own CD entry with the same ASID concurrently. This creates a small race window where the IOMMU can see two ASIDs pointing to different translations:

  CPU0                                                        CPU1
  arm_smmu_attach_dev()
    arm_smmu_detach_dev()
      spin_lock_irqsave(&smmu_domain->devices_lock, flags);
      list_del(&master->domain_head);
      spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);

                                                              arm_smmu_mmu_notifier_get()
                                                                arm_smmu_alloc_shared_cd()
                                                                  arm_smmu_share_asid():
                                                                    // Does nothing due to list_del above
                                                                    arm_smmu_update_ctx_desc_devices()
                                                                  arm_smmu_tlb_inv_asid()
                                                                arm_smmu_write_ctx_desc()
                                                                  ** Now the ASID is in two CDs with different translation
      arm_smmu_write_ctx_desc(master, IOMMU_NO_PASID, NULL);

Solve this by wrapping most of the attach flow in the arm_smmu_asid_lock. This locks more than strictly needed to prepare for the next patch which will reorganize the order of the linked list, STE and CD changes. Move arm_smmu_detach_dev() until after we have initialized the domain so the lock can be held for less time. Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Moritz Fischer <moritzf@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/5-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-29 | iommu/arm-smmu-v3: Build the whole STE in arm_smmu_make_s2_domain_ste() | Jason Gunthorpe | 2 | -14/+15
Half the code was living in arm_smmu_domain_finalise_s2(), just move it here and take the values directly from the pgtbl_ops instead of storing copies. Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Moritz Fischer <moritzf@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/4-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-29 | iommu/arm-smmu-v3: Move the STE generation for S1 and S2 domains into functions | Jason Gunthorpe | 1 | -55/+83
This is preparation to move the STE calculation higher up into the call chain and remove arm_smmu_write_strtab_ent(). These new functions will be called directly from attach_dev. Reviewed-by: Moritz Fischer <mdf@kernel.org> Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Moritz Fischer <moritzf@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/3-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-29 | iommu/arm-smmu-v3: Consolidate the STE generation for abort/bypass | Jason Gunthorpe | 1 | -42/+55
This allows writing the flow of arm_smmu_write_strtab_ent() around abort and bypass domains more naturally. Note that the core code no longer supplies NULL domains, though there is still a flow in the driver that ends up in arm_smmu_write_strtab_ent() with NULL. A later patch will remove it. Remove the duplicate calculation of the STE in arm_smmu_init_bypass_stes() and remove the force parameter. arm_smmu_rmr_install_bypass_ste() can now simply invoke arm_smmu_make_bypass_ste() directly. Rename arm_smmu_init_bypass_stes() to arm_smmu_init_initial_stes() to better reflect its purpose. Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Moritz Fischer <moritzf@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/2-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-02-29 | iommu/arm-smmu-v3: Make STE programming independent of the callers | Jason Gunthorpe | 1 | -64/+211
As the comment in arm_smmu_write_strtab_ent() explains, this routine has been limited to only work correctly in certain scenarios that the caller must ensure. Generally the caller must put the STE into ABORT or BYPASS before attempting to program it to something else.

The iommu core APIs would ideally expect the driver to do a hitless change of iommu_domain in a number of cases:

 - RESV_DIRECT support wants IDENTITY -> DMA -> IDENTITY to be hitless for the RESV ranges
 - PASID upgrade has IDENTITY on the RID with no PASID, then a PASID paging domain installed. The RID should not be impacted
 - PASID downgrade has IDENTITY on the RID and all PASIDs removed. The RID should not be impacted
 - RID does PAGING -> BLOCKING with active PASID; PASIDs should not be impacted
 - NESTING -> NESTING for carrying all the above hitless cases in a VM into the hypervisor. To comprehensively emulate the HW in a VM we should assume the VM OS is running logic like this and expecting hitless updates to be relayed to real HW.

For CD updates arm_smmu_write_ctx_desc() has a similar comment explaining how limited it is, and the driver does have a need for hitless CD updates:

 - SMMUv3 BTM S1 ASID re-label
 - SVA mm release should change the CD to answer not-present to all requests without allowing logging (EPD0)

The next patches/series are going to start removing some of this logic from the callers, and add more complex state combinations than currently exist. At the end everything that can be hitless will be hitless, including all of the above.

Introduce arm_smmu_write_ste() which will run through the multi-qword programming sequence to avoid creating an incoherent 'torn' STE in the HW caches. It automatically detects which of two algorithms to use:

 1) The disruptive V=0 update described in the spec, which disrupts the entry and does three syncs to make the change:
    - Write V=0 to QWORD 0
    - Write the entire STE except QWORD 0
    - Write QWORD 0

 2) A hitless update algorithm that follows the same rationale the driver already uses. It is safe to change IGNORED bits that HW doesn't use:
    - Write the target value into all currently unused bits
    - Write a single QWORD; this makes the new STE live atomically
    - Ensure now-unused bits are 0

The detection of which path to use and the implementation of the hitless update rely on a "used bitmask" describing what bits the HW is actually using based on the V/CFG/etc bits. This flows from the spec language, typically indicated as IGNORED. Knowing which bits the HW is using, we can update the bits it does not use and then compute how many QWORDS need to be changed. If only one qword needs to be updated, the hitless algorithm is possible.

Later patches will include CD updates in this mechanism, so make the implementation generic using a struct arm_smmu_entry_writer and struct arm_smmu_entry_writer_ops to abstract the differences between STE and CD to be plugged in.

At this point it generates the same sequence of updates as the current code, except that zeroing the VMID on entry to BYPASS/ABORT will do an extra sync (this seems to be an existing bug).

Going forward this will use a V=0 transition instead of cycling through ABORT if a non-hitless change is required. This seems more appropriate as ABORT will fail DMAs without any logging, but dropping a DMA due to transient V=0 is probably signaling a bug, so the C_BAD_STE is valuable.

Add STRTAB_STE_1_SHCFG_INCOMING to s2_cfg; this was editing the STE in place and subtly inherited the value of data[1] from abort/bypass.
Signed-off-by: Michael Shavit <mshavit@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/1-v6-96275f25c39d+2d4-smmuv3_newapi_p1_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
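To illustrate the decision between the two algorithms described above, here is a minimal C sketch of the used-bitmask comparison. The names (NUM_ENTRY_QWORDS, entry_update_is_hitless) are illustrative, not the driver's actual identifiers; only the rule itself follows the commit message:

  #include <stdbool.h>
  #include <stdint.h>

  #define NUM_ENTRY_QWORDS 8   /* an STE is 8 qwords (64 bytes) */

  /* Count how many qwords differ from the target, considering only
   * bits the HW actually reads (the "used bitmask" derived from
   * V/CFG/etc.). IGNORED bits can be rewritten freely at any time. */
  static bool entry_update_is_hitless(const uint64_t *cur,
                                      const uint64_t *target,
                                      const uint64_t *used)
  {
          unsigned int changed = 0;
          int i;

          for (i = 0; i < NUM_ENTRY_QWORDS; i++)
                  if ((cur[i] ^ target[i]) & used[i])
                          changed++;

          /* One changed qword can be made live with a single atomic
           * write; more than one forces the disruptive V=0 sequence. */
          return changed <= 1;
  }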
2024-02-22 | dt-bindings: arm-smmu: Document SM8650 GPU SMMU | Neil Armstrong | 1 | -2/+4
Document the GPU SMMU found on the SM8650 platform. Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20240216-topic-sm8650-gpu-v3-3-eb1f4b86d8d3@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2024-02-22 | dt-bindings: arm-smmu: Fix SM8[45]50 GPU SMMU 'if' condition | Neil Armstrong | 1 | -2/+11
The 'if' condition for the SM8[45]50 GPU SMMU is too large. Add the other compatible strings to the condition to only allow the clocks for the GPU SMMU nodes. Fixes: 4fff78dc2490 ("dt-bindings: arm-smmu: Document SM8[45]50 GPU SMMU") Suggested-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20240216-topic-sm8650-gpu-v3-2-eb1f4b86d8d3@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2024-02-22 | iommu/arm-smmu-qcom: Add X1E80100 MDSS compatible | Abel Vesa | 1 | -0/+1
Add the X1E80100 MDSS compatible to clients compatible list, as it also needs the workarounds. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Link: https://lore.kernel.org/r/20240131-x1e80100-iommu-arm-smmu-qcom-v1-1-c1240419c718@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2024-02-22 | dt-bindings: arm-smmu: Add QCM2290 GPU SMMU | Konrad Dybcio | 1 | -1/+2
The GPU SMMU on QCM2290 nicely fits into the description we already have for SM61[12]5. Add it. Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20240219-topic-rb1_gpu-v1-1-d260fa854707@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2024-02-22 | iommu/arm-smmu-v3: Do not use GFP_KERNEL under a spinlock | Jason Gunthorpe | 1 | -26/+12
If the SMMU is configured to use a two level CD table then arm_smmu_write_ctx_desc() allocates a CD table leaf internally using GFP_KERNEL. Due to recent changes this is being done under a spinlock to iterate over the device list - thus it will trigger a sleeping-while-atomic warning:

  arm_smmu_sva_set_dev_pasid()
    mutex_lock(&sva_lock);
    __arm_smmu_sva_bind()
      arm_smmu_mmu_notifier_get()
        spin_lock_irqsave()
        arm_smmu_write_ctx_desc()
          arm_smmu_get_cd_ptr()
            arm_smmu_alloc_cd_leaf_table()
              dmam_alloc_coherent(GFP_KERNEL)

This is a 64K high order allocation and really should not be done atomically.

At the moment the rework of the SVA to follow the new API is half finished. Recently the CD table memory was moved from the domain to the master; however, we have the confusing situation where the SVA code is wrongly using the RID domain's devices list to track which CD tables the SVA is installed in.

Remove the logic to replicate the CD across all the domain's masters during attach. We know which master and which CD table the PASID should be installed in.

Right now SVA only works when dma-iommu.c is in control of the RID translation, which means we have a single iommu_domain shared across the entire group and that iommu_domain is not shared outside the group. Critically this means that the iommu_group->devices list and the RID's smmu_domain->devices list describe the same set of masters. For PCI cases the core code also insists on singleton groups, so there is only one entry in the smmu_domain->devices list and it is equal to the master being passed in to arm_smmu_sva_set_dev_pasid(). Only non-PCI cases may have multi-device groups. However, the core code will repeat the calls to arm_smmu_sva_set_dev_pasid() across the entire iommu_group->devices list.

Instead of having arm_smmu_mmu_notifier_get() indirectly loop over all the devices in the group via the RID's smmu_domain, rely on __arm_smmu_sva_bind() being called for each device in the group and install the repeated CD entry that way. This avoids taking the spinlock to access the devices list and permits arm_smmu_write_ctx_desc() to use a sleeping allocation. arm_smmu_mm_release() remains a confusing situation; fixing it requires tracking attached masters inside the SVA domain.

Removing the loop allows arm_smmu_write_ctx_desc() to be called outside the spinlock and thus it is safe to use GFP_KERNEL. Move the clearing of the CD into arm_smmu_sva_remove_dev_pasid() so that arm_smmu_mmu_notifier_get/put() remain paired functions. Fixes: 24503148c545 ("iommu/arm-smmu-v3: Refactor write_ctx_desc") Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Closes: https://lore.kernel.org/all/4e25d161-0cf8-4050-9aa3-dfa21cd63e56@moroto.mountain/ Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Michael Shavit <mshavit@google.com> Link: https://lore.kernel.org/r/0-v3-11978fc67151+112-smmu_cd_atomic_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
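As a hedged sketch of the general anti-pattern this commit removes (simplified, generic names; not the driver's exact code), the point is that a sleeping GFP_KERNEL allocation must happen before the spinlock is taken:

  /* Broken: GFP_KERNEL may sleep, but a spinlock is held. */
  spin_lock_irqsave(&lock, flags);
  leaf = dmam_alloc_coherent(dev, SZ_64K, &dma, GFP_KERNEL);  /* sleeps! */
  spin_unlock_irqrestore(&lock, flags);

  /* Fixed: allocate first, then take the lock only around the
   * non-sleeping pointer/list updates. */
  leaf = dmam_alloc_coherent(dev, SZ_64K, &dma, GFP_KERNEL);
  if (!leaf)
          return -ENOMEM;
  spin_lock_irqsave(&lock, flags);
  /* ... publish the leaf table pointer ... */
  spin_unlock_irqrestore(&lock, flags);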
2024-02-13 | Revert "iommu/arm-smmu: Convert to domain_alloc_paging()" | Dmitry Baryshkov | 1 | -11/+6
This reverts commit 9b3febc3a3da ("iommu/arm-smmu: Convert to domain_alloc_paging()"). It breaks the Qualcomm MSM8996 platform: calling arm_smmu_write_context_bank() from the new codepath results in the platform being reset because of an unclocked hardware access. Fixes: 9b3febc3a3da ("iommu/arm-smmu: Convert to domain_alloc_paging()") Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Acked-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20240213-iommu-revert-domain-alloc-v1-1-325ff55dece4@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2024-01-28 | Linux 6.8-rc2 | Linus Torvalds | 1 | -1/+1
2024-01-27 | mips: Call lose_fpu(0) before initializing fcr31 in mips_set_personality_nan | Xi Ruoyao | 1 | -0/+6
If we still own the FPU after initializing fcr31, when we are preempted the dirty value in the FPU will be read out and stored into fcr31, clobbering our setting. This can cause an improper floating-point environment after execve(). For example:

  zsh% cat measure.c
  #include <fenv.h>
  int main() { return fetestexcept(FE_INEXACT); }
  zsh% cc measure.c -o measure -lm
  zsh% echo $((1.0/3))   # raising FE_INEXACT
  0.33333333333333331
  zsh% while ./measure; do ; done
  (stopped in seconds)

Call lose_fpu(0) before setting fcr31 to prevent this. Closes: https://lore.kernel.org/linux-mips/7a6aa1bbdbbe2e63ae96ff163fab0349f58f1b9e.camel@xry111.site/ Fixes: 9b26616c8d9d ("MIPS: Respect the ISA level in FCSR handling") Cc: stable@vger.kernel.org Signed-off-by: Xi Ruoyao <xry111@xry111.site> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
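The shape of the fix, as the commit message describes it (a sketch; the field access is illustrative of MIPS thread state, and the real change lives in mips_set_personality_nan()):

  /* Give up FPU ownership first, so a later preemption cannot save
   * the stale hardware FCSR back over the value we are about to set. */
  lose_fpu(0);
  current->thread.fpu.fcr31 = fcr31;   /* now safe to initialize */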
2024-01-27 | MIPS: loongson64: set nid for reserved memblock region | Huang Pei | 2 | -0/+5
Commit 61167ad5fecd ("mm: pass nid to reserve_bootmem_region()") reveals that reserved memblock regions have no valid node id set. Set it correctly, since the loongson64 firmware makes the node id clear in its memory layout info. This works around a booting failure on 3A1000+ since commit 61167ad5fecd ("mm: pass nid to reserve_bootmem_region()") under CONFIG_DEFERRED_STRUCT_PAGE_INIT. Signed-off-by: Huang Pei <huangpei@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2024-01-27 | Revert "MIPS: loongson64: set nid for reserved memblock region" | Thomas Bogendoerfer | 2 | -4/+0
This reverts commit ce7b1b97776ec0b068c4dd6b6dbb48ae09a23519. Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2024-01-26 | platform/x86: touchscreen_dmi: Add info for the TECLAST X16 Plus tablet | Phoenix Chen | 1 | -0/+35
Add touch screen info for TECLAST X16 Plus tablet. Signed-off-by: Phoenix Chen <asbeltogf@gmail.com> Link: https://lore.kernel.org/r/20240126095308.5042-1-asbeltogf@gmail.com Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2024-01-26 | platform/x86/intel/ifs: Call release_firmware() when handling errors. | Jithu Joseph | 1 | -1/+2
A missing release_firmware() call in the error-handling path blocked any future image loading. Fix the return code and call release_firmware() to release the bad image. Fixes: 25a76dbb36dd ("platform/x86/intel/ifs: Validate image size") Reported-by: Pengfei Xu <pengfei.xu@intel.com> Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Tested-by: Pengfei Xu <pengfei.xu@intel.com> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Link: https://lore.kernel.org/r/20240125082254.424859-2-ashok.raj@intel.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
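A hedged sketch of the corrected error path (the helper names are illustrative; only request_firmware()/release_firmware() are the real firmware API):

  const struct firmware *fw;
  int ret;

  ret = request_firmware(&fw, image_path, dev);   /* path illustrative */
  if (ret)
          return ret;

  ret = validate_ifs_image(fw);    /* illustrative validation step */
  if (ret)
          goto out_release;        /* previously returned here, leaking fw */

  ret = load_ifs_image(fw);        /* illustrative */

  out_release:
          release_firmware(fw);    /* release even when the image is bad */
          return ret;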
2024-01-26 | platform/x86/amd/pmf: Fix memory leak in amd_pmf_get_pb_data() | Cong Liu | 1 | -1/+3
amd_pmf_get_pb_data() will allocate memory for the policy buffer, but does not free it if copy_from_user() fails. This leads to a memory leak. Fixes: 10817f28e533 ("platform/x86/amd/pmf: Add capability to sideload of policy binary") Reviewed-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Signed-off-by: Cong Liu <liucong2@kylinos.cn> Link: https://lore.kernel.org/r/20240124012939.6550-1-liucong2@kylinos.cn Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com>
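A hedged sketch of the leak and its fix (simplified; variable names are illustrative, the pattern is the one the commit message describes):

  u8 *buf;

  buf = kzalloc(length, GFP_KERNEL);
  if (!buf)
          return -ENOMEM;

  if (copy_from_user(buf, user_buf, length)) {
          kfree(buf);          /* the missing free: without it, buf leaks */
          return -EFAULT;
  }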
2024-01-26 | platform/x86/amd/pmf: Get ambient light information from AMD SFH driver | Shyam Sundar S K | 1 | -0/+8
The AMD SFH driver has APIs defined to export the ambient light information; use this within the PMF driver to send inputs to the PMF TA, so that the PMF driver can act on the actions coming from the TA. Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Link: https://lore.kernel.org/r/20240123141458.3715211-2-Shyam-sundar.S-k@amd.com Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2024-01-26 | platform/x86/amd/pmf: Get Human presence information from AMD SFH driver | Shyam Sundar S K | 2 | -0/+29
The AMD SFH driver has APIs defined to export the human presence information; use this within the PMF driver to send inputs to the PMF TA, so that the PMF driver can act on the actions coming from the TA. Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Link: https://lore.kernel.org/r/20240123141458.3715211-1-Shyam-sundar.S-k@amd.com Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2024-01-27 | Revert "nouveau: push event block/allowing out of the fence context" | Dave Airlie | 2 | -27/+6
This reverts commit eacabb5462717a52fccbbbba458365a4f5e61f35. That commit causes some regressions in desktop usage, and this revert will reintroduce the original deadlock in DRI_PRIME situations. I've got an idea to fix that by offloading to a workqueue in a different spot; however, this code has a race condition where we sometimes miss interrupts, so I'd like to fix that as well. Cc: stable@vger.kernel.org Signed-off-by: Dave Airlie <airlied@redhat.com>
2024-01-26 | MAINTAINERS: Add Andreas Larsson as co-maintainer for arch/sparc | Andreas Larsson | 1 | -0/+1
Dave has not been very active on arch/sparc for the past two years. I have been contributing to the SPARC32 port as well as maintaining out-of-tree SPARC32 patches for LEON3/4/5 (SPARCv8 with CAS support) since 2012. I am willing to step up as an arch/sparc (co-)maintainer. For recent discussions on the matter, see [1] and [2]. [1] https://lore.kernel.org/r/20230713075235.2164609-1-u.kleine-koenig@pengutronix.de [2] https://lore.kernel.org/r/20231209105816.GA1085691@ravnborg.org/ Signed-off-by: Andreas Larsson <andreas@gaisler.com> Suggested-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Acked-by: Jose E. Marchesi <jose.marchesi@oracle.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-01-26 | drm: bridge: samsung-dsim: Don't use FORCE_STOP_STATE | Michael Walle | 1 | -30/+2
The FORCE_STOP_STATE bit is unsuitable to force the DSI link into LP-11 mode. It seems the bridge internally queues DSI packets and when the FORCE_STOP_STATE bit is cleared, they are sent in close succession without any useful timing (this also means that the DSI lanes won't go into LP-11 mode). The length of this gibberish varies between 1ms and 5ms. This sometimes breaks an attached bridge (TI SN65DSI84 in this case). In our case, the bridge will fail in about 1 per 500 reboots.

The FORCE_STOP_STATE handling was introduced to have the DSI lanes in LP-11 state during the .pre_enable phase. But as it turns out, none of this is needed at all. Between samsung_dsim_init() and samsung_dsim_set_display_enable() the lanes are already in LP-11 mode. The code as it was before commit 20c827683de0 ("drm: bridge: samsung-dsim: Fix init during host transfer") and 0c14d3130654 ("drm: bridge: samsung-dsim: Fix i.MX8M enable flow to meet spec") was correct in this regard.

This patch basically reverts both commits. It was tested on an i.MX8M SoC with an SN65DSI84 bridge. The signals were probed and the DSI packets were decoded during initialization and link start-up. After this patch the first DSI packet on the link is a VSYNC packet and the timing is correct.

Command mode between .pre_enable and .enable was also briefly tested by a quick hack. There was no DSI link partner which would have responded, but it was verified that the DSI packet was sent on the link. As a side note, command mode seems to just work in HS mode; I couldn't find any indication that the bridge handles commands in LP mode.

Fixes: 20c827683de0 ("drm: bridge: samsung-dsim: Fix init during host transfer") Fixes: 0c14d3130654 ("drm: bridge: samsung-dsim: Fix i.MX8M enable flow to meet spec") Signed-off-by: Michael Walle <mwalle@kernel.org> Signed-off-by: Inki Dae <inki.dae@samsung.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231113164344.1612602-1-mwalle@kernel.org
2024-01-26 | riscv: dts: sophgo: separate sg2042 mtime and mtimecmp to fit aclint format | Inochi Amaoto | 1 | -32/+48
Change the timer layout in the dtb to fit the format needed by the SBI. Fixes: 967a94a92aaa ("riscv: dts: add initial Sophgo SG2042 SoC device tree") Reviewed-by: Chen Wang <unicorn_wang@outlook.com> Reviewed-by: Guo Ren <guoren@kernel.org> Signed-off-by: Inochi Amaoto <inochiama@outlook.com> Signed-off-by: Chen Wang <unicorn_wang@outlook.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-01-26 | MIPS: lantiq: register smp_ops on non-smp platforms | Aleksander Jan Bajkowski | 1 | -4/+3
Lantiq uses a common kernel config for devices with 24Kc and 34Kc cores. The changes made previously to add support for interrupts on all cores work on 24Kc platforms with SMP disabled and 34Kc platforms with SMP enabled. This patch fixes boot issues on Danube (single core 24Kc) with SMP enabled. Fixes: 730320fd770d ("MIPS: lantiq: enable all hardware interrupts on second VPE") Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2024-01-26 | MIPS: loongson64: set nid for reserved memblock region | Huang Pei | 2 | -0/+4
Commit 61167ad5fecd ("mm: pass nid to reserve_bootmem_region()") reveals that reserved memblock regions have no valid node id set. Set it correctly, since the loongson64 firmware makes the node id clear in its memory layout info. This works around a booting failure on 3A1000+ since commit 61167ad5fecd ("mm: pass nid to reserve_bootmem_region()") under CONFIG_DEFERRED_STRUCT_PAGE_INIT. Signed-off-by: Huang Pei <huangpei@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2024-01-26 | MIPS: reserve exception vector space ONLY ONCE | Huang Pei | 1 | -1/+7
"cpu_probe" is called both by BP and APs, but reserving exception vector (like 0x0-0x1000) called by "cpu_probe" need once and calling on APs is too late since memblock is unavailable at that time. So, reserve exception vector ONLY by BP. Suggested-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Huang Pei <huangpei@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2024-01-26 | MIPS: BCM63XX: Fix missing prototypes | Florian Fainelli | 7 | -6/+7
Most of the symbols for which we do not have a prototype can actually be made static and for the few that cannot, there is already a declaration in a header for it. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2024-01-26 | LoongArch: KVM: Add returns to SIMD stubs | Randy Dunlap | 1 | -2/+2
The stubs for kvm_own_lsx()/kvm_own_lasx() when CONFIG_CPU_HAS_LSX or CONFIG_CPU_HAS_LASX is not defined should have a return value since they return an int, so add "return -EINVAL;" to the stubs. This fixes the build error:

  In file included from ../arch/loongarch/include/asm/kvm_csr.h:12,
                   from ../arch/loongarch/kvm/interrupt.c:8:
  ../arch/loongarch/include/asm/kvm_vcpu.h: In function 'kvm_own_lasx':
  ../arch/loongarch/include/asm/kvm_vcpu.h:73:39: error: no return statement in function returning non-void [-Werror=return-type]
     73 | static inline int kvm_own_lasx(struct kvm_vcpu *vcpu) { }

Fixes: db1ecca22edf ("LoongArch: KVM: Add LSX (128bit SIMD) support") Fixes: 118e10cd893d ("LoongArch: KVM: Add LASX (256bit SIMD) support") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
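Per the commit message, the fixed stubs then look like this (a sketch of the #else branches in kvm_vcpu.h; the two stubs sit under their respective CONFIG_CPU_HAS_LSX/CONFIG_CPU_HAS_LASX guards):

  static inline int kvm_own_lsx(struct kvm_vcpu *vcpu) { return -EINVAL; }
  static inline int kvm_own_lasx(struct kvm_vcpu *vcpu) { return -EINVAL; }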
2024-01-26 | LoongArch: KVM: Fix build due to API changes | Huacai Chen | 1 | -2/+2
Commit 8569992d64b8f750e34b7858eac ("KVM: Use gfn instead of hva for mmu_notifier_retry") replaces mmu_invalidate_retry_hva() usage with mmu_invalidate_retry_gfn() for x86; LoongArch also needs similar changes to fix its build. Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2024-01-26 | LoongArch/smp: Call rcutree_report_cpu_starting() at tlb_init() | Huacai Chen | 2 | -7/+10
Machines which have more than 8 nodes fail to boot SMP after commit a2ccf46333d7b2cf96 ("LoongArch/smp: Call rcutree_report_cpu_starting() earlier"). Such machines use a tlb-based per-cpu base address rather than a dmw-based one, so per-cpu variables can only be accessed after tlb_init(). But rcutree_report_cpu_starting() is now called before tlb_init() and it does access per-cpu variables. Since the original patch wanted to avoid the lockdep warning caused by page allocation in tlb_init(), we can move rcutree_report_cpu_starting() into tlb_init(), after tlb exception configuration but before page allocation. Fixes: a2ccf46333d7b2cf96 ("LoongArch/smp: Call rcutree_report_cpu_starting() earlier") Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2024-01-26 | drm/sched: Drain all entities in DRM sched run job worker | Matthew Brost | 1 | -8/+7
All entities must be drained in the DRM scheduler run job worker to avoid the following case: an entity is found that is ready, no job is found ready on that entity, and the run job worker goes idle while other entities with ready jobs remain. Draining all ready entities (i.e. looping over all ready entities) in the run job worker ensures all jobs that are ready will be scheduled. Cc: Thorsten Leemhuis <regressions@leemhuis.info> Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com> Closes: https://lore.kernel.org/all/CABXGCsM2VLs489CH-vF-1539-s3in37=bwuOWtoeeE+q26zE+Q@mail.gmail.com/ Reported-and-tested-by: Mario Limonciello <mario.limonciello@amd.com> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3124 Link: https://lore.kernel.org/all/20240123021155.2775-1-mario.limonciello@amd.com/ Reported-and-tested-by: Vlastimil Babka <vbabka@suse.cz> Closes: https://lore.kernel.org/dri-devel/05ddb2da-b182-4791-8ef7-82179fd159a8@amd.com/T/#m0c31d4d1b9ae9995bb880974c4f1dbaddc33a48a Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240124210811.1639040-1-matthew.brost@intel.com
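A hedged sketch of the drain loop the commit describes (illustrative control flow, not the scheduler's exact code; only the "keep trying the next ready entity" shape is taken from the message):

  /* Loop over all ready entities instead of giving up after the first
   * one without a ready job, so the worker cannot go idle while other
   * entities still have runnable jobs. */
  while ((entity = drm_sched_select_entity(sched))) {
          struct drm_sched_job *job = drm_sched_entity_pop_job(entity);

          if (!job)
                  continue;   /* this entity had nothing ready; try the next */

          /* ... run the job and requeue the worker ... */
  }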
2024-01-25 | bcachefs: __lookup_dirent() works in snapshot, not subvol | Kent Overstreet | 2 | -18/+27
Add a new helper, bch2_hash_lookup_in_snapshot(), for when we're not operating in a subvolume and already have a snapshot ID, and then use it in lookup_lostfound() -> __lookup_dirent(). This is a bugfix: lookup_lostfound() doesn't take a subvolume ID; we were passing a nonsense subvolume ID before, and don't have one to pass since we may be operating in an interior snapshot node that doesn't have a subvolume ID. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-01-25 | spi: fix finalize message on error return | David Lechner | 1 | -0/+4
In __spi_pump_transfer_message(), the message was not finalized in the first error return as it is in the other error return paths. Not finalizing the message could cause anything waiting on the message to complete to hang forever. This adds the missing call to spi_finalize_current_message(). Fixes: ae7d2346dc89 ("spi: Don't use the message queue if possible in spi_sync") Signed-off-by: David Lechner <dlechner@baylibre.com> Link: https://msgid.link/r/20240125205312.3458541-2-dlechner@baylibre.com Signed-off-by: Mark Brown <broonie@kernel.org>
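A hedged sketch of the fix pattern (simplified; the failing step and names are illustrative, and only spi_finalize_current_message() is the real API call being added):

  ret = prepare_hardware(ctlr);   /* illustrative first step that can fail */
  if (ret) {
          msg->status = ret;
          /* The missing call: wake anyone waiting on this message
           * instead of leaving them to hang forever. */
          spi_finalize_current_message(ctlr);
          return ret;
  }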
2024-01-25drm/amd/display: "Enable IPS by default"Roman Li1-1/+2
[Why] IPS was temporarily disabled due to instability. It was fixed in dmub firmware and with:
 - "drm/amd/display: Add IPS checks before dcn register access"
 - "drm/amd/display: Disable ips before dc interrupt setting"
[How] Enable IPS by default. Disable IPS if the 0x800 bit is set in the amdgpu.dcdebugmask module parameter. Signed-off-by: Roman Li <Roman.Li@amd.com> Tested-by: Mark Broadworth <mark.broadworth@amd.com> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-01-25 | drm/amd: Add a DC debug mask for IPS | Roman Li | 1 | -0/+1
For debugging IPS-related issues, expose a new debug mask that allows disabling IPS. Usage: amdgpu.dcdebugmask=0x800 Signed-off-by: Roman Li <Roman.Li@amd.com> Tested-by: Mark Broadworth <mark.broadworth@amd.com> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-01-25 | drm/amd/display: Disable ips before dc interrupt setting | Roman Li | 1 | -1/+4
[Why] While in IPS2, access to dcn registers is not allowed. If an interrupt results in a dc call, we should disable IPS. [How] Safeguard register access in IPS2 by disabling idle optimization before calling the dc interrupt setting api. Signed-off-by: Roman Li <Roman.Li@amd.com> Tested-by: Mark Broadworth <mark.broadworth@amd.com> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
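A hedged sketch of the safeguard (the exact call site and arguments are illustrative of the dc API as described in the message, not the patch's literal diff):

  /* Exit IPS2 before the interrupt-setting dc call touches DCN
   * registers; idle optimizations are re-enabled later. */
  dc_allow_idle_optimizations(dc, false);
  dc_interrupt_set(dc, irq_source, enable);   /* now safe */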
2024-01-25 | drm/amd/display: Replay + IPS + ABM in Full Screen VPB | ChunTao Tso | 4 | -0/+57
[Why] Because ABM waits for VStart to start collecting histogram data, we can't enter IPS while full screen video is playing. [How] Modify the panel refresh rate to the maximum multiple of the current refresh rate. Reviewed-by: Dennis Chan <dennis.chan@amd.com> Acked-by: Roman Li <roman.li@amd.com> Signed-off-by: ChunTao Tso <chuntao.tso@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-01-25 | drm/amd/display: Add IPS checks before dcn register access | Roman Li | 1 | -10/+6
[Why] With IPS enabled, a system hangs once PSR is active. PSR becoming active triggers a transition to the IPS2 state, and while in IPS2 an access to dcn registers results in a hard hang. The existing check doesn't cover the PSR sequence. [How] Safeguard register access by disabling idle optimization in atomic commit and crtc scanout. It will be re-enabled on the next vblank. Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Acked-by: Roman Li <roman.li@amd.com> Signed-off-by: Roman Li <roman.li@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-01-25 | drm/amd/display: Add Replay IPS register for DMUB command table | Alvin Lee | 1 | -0/+1
- Introduce a new Replay mode for DMUB version 0.0.199.0 Reviewed-by: Martin Leung <martin.leung@amd.com> Acked-by: Alex Hung <alex.hung@amd.com> Signed-off-by: Alvin Lee <alvin.lee2@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-01-25 | drm/amd/display: Allow IPS2 during Replay | Nicholas Kazlauskas | 3 | -1/+11
[Why & How] Add regkey to block video playback in IPS2 by default Allow idle optimizations in the same spot we allow Replay for video playback usecases. Avoid sending it when there's an external display connected by modifying the allow idle checks to check for active non-eDP screens. Reviewed-by: Charlene Liu <charlene.liu@amd.com> Acked-by: Alex Hung <alex.hung@amd.com> Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-01-25 | drm/amdgpu/gfx11: set UNORD_DISPATCH in compute MQDs | Alex Deucher | 2 | -1/+2
This needs to be set to 1 to avoid a potential deadlock on GC 10.x and newer. On GC 9.x and older, this needs to be set to 0. Otherwise, this can lead to hangs in some mixed graphics and compute workloads. Updated firmware is also required for AQL. Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2024-01-25 | drm/amdgpu/gfx10: set UNORD_DISPATCH in compute MQDs | Alex Deucher | 2 | -1/+2
This needs to be set to 1 to avoid a potential deadlock on GC 10.x and newer. On GC 9.x and older, this needs to be set to 0. Otherwise, this can lead to hangs in some mixed graphics and compute workloads. Updated firmware is also required for AQL. Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2024-01-25 | drm/amd/amdgpu: Assign GART pages to AMD device mapping | Tom St Denis | 1 | -0/+8
This allows kernel mapped pages like the PDB and PTB to be read via the iomem debugfs when there is no vram in the system. Signed-off-by: Tom St Denis <tom.stdenis@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org # 6.7.x
2024-01-25 | drm/amd/pm: Fetch current power limit from FW | Lijo Lazar | 1 | -0/+1
The power limit of SMUv13.0.6 SOCs can be updated through out-of-band means. Fetch the limit from firmware instead of using cached values. Signed-off-by: Lijo Lazar <lijo.lazar@amd.com> Reviewed-by: Asad Kamal <asad.kamal@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org # 6.7.x
2024-01-25 | drm/amdgpu: Fix null pointer dereference | Hawking Zhang | 1 | -1/+1
amdgpu_reg_state_sysfs_fini could be invoked at a time when asic_func is not even initialized, i.e., when amdgpu_discovery_init fails for some reason. Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com> Reviewed-by: Lijo Lazar <lijo.lazar@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org