path: root/arch/xtensa
Age    Commit message    Author    Files    Lines
2022-06-20xtensa: change '.bss' to '.section .bss'Max Filippov1-1/+1
For some reason (ancient assembler?) the following build error is reported by the kisskb: kisskb/src/arch/xtensa/kernel/entry.S: Error: unknown pseudo-op: `.bss': => 2176 Change abbreviated '.bss' to the full '.section .bss, "aw"' to fix this error. Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-06-18xtensa: xtfpga: Fix refcount leak bug in setupLiang He1-0/+1
In machine_setup(), of_find_compatible_node() will return a node pointer with refcount incremented. We should use of_node_put() when it is not used anymore. Cc: stable@vger.kernel.org Signed-off-by: Liang He <windhl@126.com> Message-Id: <20220617115323.4046905-1-windhl@126.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-06-18xtensa: Fix refcount leak bug in time.cLiang He1-0/+1
In calibrate_ccount(), of_find_compatible_node() will return a node pointer with refcount incremented. We should use of_node_put() when it is not used anymore. Cc: stable@vger.kernel.org Signed-off-by: Liang He <windhl@126.com> Message-Id: <20220617124432.4049006-1-windhl@126.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
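Both refcount fixes above follow the same device-tree idiom; a minimal sketch of the pattern, with the compatible string and the surrounding code assumed rather than taken from the actual xtensa sources:

  #include <linux/of.h>

  static void example_setup(void)
  {
  	struct device_node *np;

  	/* of_find_compatible_node() returns the node with its refcount raised. */
  	np = of_find_compatible_node(NULL, NULL, "cdns,xtfpga-example");
  	if (!np)
  		return;

  	/* ... read properties from np ... */

  	of_node_put(np);	/* drop the reference taken by the lookup */
  }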
2022-05-23xtensa: Return true/false (not 1/0) from bool functionYang Li1-1/+1
Return boolean values ("true" or "false") instead of 1 or 0 from a bool function. This fixes the following warning from coccicheck: ./arch/xtensa/kernel/traps.c:304:10-11: WARNING: return of 0/1 in function 'check_div0' with return type bool Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Message-Id: <20220518230953.112266-1-yang.lee@linux.alibaba.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
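For illustration, a minimal before/after of the pattern coccicheck complains about (the function name and body here are made up, not the actual check_div0):

  /* before: bool function returning an int literal */
  static bool is_ready(int state)
  {
  	return state == 2 ? 1 : 0;
  }

  /* after: return the boolean value directly */
  static bool is_ready(int state)
  {
  	return state == 2;
  }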
2022-05-17xtensa: improve call0 ABI probingMax Filippov4-0/+24
When call0 userspace ABI support by probing is enabled, instructions that cause an illegal instruction exception while PS.WOE is clear are retried with PS.WOE set before calling the C-level exception handler. Record the user PC at which PS.WOE was set in the fast exception handler and clear PS.WOE in the C-level exception handler if we get there from the same address. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-17xtensa: support artificial division by 0 exceptionMax Filippov1-0/+23
On xtensa cores without the hardware division option, the division support functions from libgcc react to a division by 0 attempt by executing an illegal instruction followed by the characters 'DIV0'. Recognize this pattern in the illegal instruction exception handler and convert it to a division by 0 exception. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-13xtensa: use fallback for random_get_entropy() instead of zeroJason A. Donenfeld1-4/+2
In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Arnd Bergmann <arnd@arndb.de> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-13xtensa: add trap handler for division by zeroMax Filippov1-1/+7
Add a C-level handler for the division by zero exception and kill the task if it was thrown from kernel space, or send SIGFPE otherwise. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
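A hedged sketch of what such a C-level handler looks like; the die() usage and the handler name follow common xtensa/kernel conventions but are not copied from the actual patch:

  static void do_div0(struct pt_regs *regs)
  {
  	/* Division by zero in kernel mode: kill the task. */
  	if (!user_mode(regs))
  		die("Unhandled division by zero in kernel", regs, SIGKILL);

  	/* User mode: deliver SIGFPE with the faulting PC. */
  	force_sig_fault(SIGFPE, FPE_INTDIV, (void __user *)regs->pc);
  }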
2022-05-11ptrace/xtensa: Replace PT_SINGLESTEP with TIF_SINGLESTEPEric W. Biederman2-4/+4
xtensa is the last user of the PT_SINGLESTEP flag. Changing tsk->ptrace in user_enable_single_step and user_disable_single_step without locking could potentially cause problems. So use a thread info flag instead of a flag in tsk->ptrace. Use TIF_SINGLESTEP, which xtensa already had defined but unused. Remove the definitions of PT_SINGLESTEP and PT_BLOCKSTEP as they have no more users. Cc: stable@vger.kernel.org Acked-by: Max Filippov <jcmvbkbc@gmail.com> Tested-by: Kees Cook <keescook@chromium.org> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Link: https://lkml.kernel.org/r/20220505182645.497868-4-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-05-11xtensa/simdisk: fix proc_read_simdisk()Yi Yang1-6/+12
The commit a69755b18774 ("xtensa simdisk: switch to proc_create_data()") split read operation into two parts, first retrieving the path when it's non-null and second retrieving the trailing '\n'. However when the path is non-null the first simple_read_from_buffer updates ppos, and the second simple_read_from_buffer returns 0 if ppos is greater than 1 (i.e. almost always). As a result reading from that proc file is almost always empty. Fix it by making a temporary copy of the path with the trailing '\n' and using simple_read_from_buffer on that copy. Cc: stable@vger.kernel.org Fixes: a69755b18774 ("xtensa simdisk: switch to proc_create_data()") Signed-off-by: Yi Yang <yiyang13@huawei.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
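A hedged sketch of the fix's shape: format the path plus the trailing newline into one temporary buffer and hand that single buffer to simple_read_from_buffer(), so ppos is consumed only once. The struct lookup helper is hypothetical.

  static ssize_t proc_read_simdisk(struct file *file, char __user *buf,
  				 size_t size, loff_t *ppos)
  {
  	const char *path = get_simdisk_path(file);	/* hypothetical lookup */
  	char *tmp;
  	ssize_t ret;

  	if (!path)
  		return simple_read_from_buffer(buf, size, ppos, "\n", 1);

  	/* path and '\n' in one buffer, read in a single pass */
  	tmp = kasprintf(GFP_KERNEL, "%s\n", path);
  	if (!tmp)
  		return -ENOMEM;

  	ret = simple_read_from_buffer(buf, size, ppos, tmp, strlen(tmp));
  	kfree(tmp);
  	return ret;
  }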
2022-05-11xtensa: no need to initialise statics to 0Jason Wang1-1/+1
Static variables do not need to be initialised to 0, because the compiler will initialise all uninitialised statics to 0. Thus, remove the unneeded initializations. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Message-Id: <20220508022910.98481-1-wangborong@cdjrlc.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-07fork: Generalize PF_IO_WORKER handlingEric W. Biederman1-6/+5
Add fn and fn_arg members into struct kernel_clone_args and test for them in copy_thread (instead of testing for PF_KTHREAD | PF_IO_WORKER). This allows any task that wants to be a user space task that only runs in kernel mode to use this functionality. The code on x86 is an exception and still retains a PF_KTHREAD test because x86, unlike everything else, handles kthreads slightly differently than user space tasks that start with a function. The functions that created tasks that start with a function have been updated to set ".fn" and ".fn_arg" instead of ".stack" and ".stack_size". These functions are fork_idle(), create_io_thread(), kernel_thread(), and user_mode_thread(). Link: https://lkml.kernel.org/r/20220506141512.516114-4-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-05-07fork: Pass struct kernel_clone_args into copy_threadEric W. Biederman1-3/+5
With io_uring we have started supporting tasks that are for most purposes user space tasks that exclusively run code in kernel mode. The kernel task that exec's init and tasks that exec user mode helpers are also user mode tasks that just run kernel code until they call kernel execve. Pass kernel_clone_args into copy_thread so these oddball tasks can be supported more cleanly and easily. v2: Fix spelling of kernel_clone_args on h8300 Link: https://lkml.kernel.org/r/20220506141512.516114-2-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
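A hedged sketch (not the upstream diff) of where these two patches take an arch copy_thread(): it receives kernel_clone_args and distinguishes kernel-mode-only tasks by args->fn rather than by PF_KTHREAD | PF_IO_WORKER. The thread-struct fields used here are hypothetical.

  int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
  {
  	struct pt_regs *childregs = task_pt_regs(p);

  	if (unlikely(args->fn)) {
  		/* fork_idle()/kernel_thread()/io worker: start at fn(fn_arg) */
  		memset(childregs, 0, sizeof(*childregs));
  		p->thread.entry = (unsigned long)args->fn;	/* hypothetical fields */
  		p->thread.arg   = (unsigned long)args->fn_arg;
  	} else {
  		/* ordinary user fork/clone: duplicate the parent's registers */
  		*childregs = *current_pt_regs();
  		/* arch-specific: zero the child's syscall return register here */
  	}
  	return 0;
  }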
2022-05-01xtensa: clean up labels in the kernel entry assemblyMax Filippov1-47/+53
Don't use numeric labels for long jumps, use named local labels instead. Avoid conditional label definition. No functional changes. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: don't leave invalid TLB entry in fast_store_prohibitedMax Filippov1-1/+5
When fast_store_prohibited needs to go to the C-level exception handler it leaves the TLB entry that caused the page fault in the TLB. If the faulting task gets switched to a different CPU and completes the page table update there, the TLB entry will get out of sync with the page table, which may cause a livelock on access to that page. Invalidate the faulting TLB entry on the slow-path exit from fast_store_prohibited. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: fix declaration of _SecondaryResetVector_text_*Max Filippov1-1/+1
Secondary reset vector is defined, compiled and used when CONFIG_SECONDARY_RESET_VECTOR is enabled, not only on SMP. Make declarations of _SecondaryResetVector_text_* symbols available accordingly. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: enable ARCH_HAS_DEBUG_VM_PGTABLEMax Filippov1-0/+1
xtensa kernels successfully build and run with CONFIG_DEBUG_VM_PGTABLE=y, enable arch support for it. Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: add hibernation supportMax Filippov5-0/+129
Define ARCH_HIBERNATION_POSSIBLE in Kconfig and implement hibernation callbacks. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: support coprocessors on SMPMax Filippov11-67/+230
Current coprocessor support on xtensa only works correctly on uniprocessor configurations. Make it work on SMP too and keep it lazy. Make the coprocessor_owner array per-CPU and move it to struct exc_table for easy access from the fast_coprocessor exception handler. Allow a task to have live coprocessors only on a single CPU and record that CPU's number in struct thread_info::cp_owner_cpu. Change the meaning of struct thread_info::cpenable to be 'coprocessors live on cp_owner_cpu'. Introduce a C-level coprocessor exception handler that flushes and releases live coprocessors of the task taking the 'coprocessor disabled' exception and call it from the fast_coprocessor handler when the task has live coprocessors on another CPU. Make coprocessor_flush_all and coprocessor_release_all work correctly when called from any CPU by sending an IPI to the cp_owner_cpu. Add the function coprocessor_flush_release_all to do flush followed by release atomically. Add the function local_coprocessors_flush_release_all to flush and release all coprocessors on the local CPU and use it to flush coprocessor contexts from the CPU that goes offline. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
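A hedged sketch of the cross-CPU part described above: when the live context sits on another CPU, send an IPI and let that CPU do the local flush/release. The exact signatures are assumptions.

  static void ipi_flush_release(void *info)
  {
  	/* Runs on the owning CPU; info identifies the task whose state to drop. */
  	local_coprocessors_flush_release_all();
  }

  void coprocessor_flush_release_all(struct thread_info *ti)
  {
  	int cpu = get_cpu();

  	if (ti->cp_owner_cpu == cpu)
  		local_coprocessors_flush_release_all();
  	else
  		smp_call_function_single(ti->cp_owner_cpu,
  					 ipi_flush_release, ti, 1);
  	put_cpu();
  }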
2022-05-01xtensa: get rid of stack frame in coprocessor_flushMax Filippov1-6/+5
coprocessor_flush is an ordinary function, it can use all registers. Don't reserve stack frame for it and use a7 to preserve a0 around the context saving call. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: merge SAVE_CP_REGS_TAB and LOAD_CP_REGS_TABMax Filippov1-48/+37
Both tables share the same offset field but have different function pointers. Merge them into a single table with 3-element entries to reduce code and data duplication. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: add xtensa_xsr macroMax Filippov1-0/+7
xtensa_xsr does the XSR instruction for the specified special register. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
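XSR exchanges an address register with a special register, so the macro both writes the new value and hands back the old one. A hedged sketch of what such a macro typically looks like (needs linux/stringify.h; not copied from the header):

  #define xtensa_xsr(x, sr) \
  	({ \
  		unsigned int __v = (unsigned int)(x); \
  		__asm__ __volatile__ ("xsr %0, " __stringify(sr) \
  				      : "+a"(__v)); \
  		__v; /* previous contents of the special register */ \
  	})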
2022-05-01xtensa: handle coprocessor exceptions in kernel modeMax Filippov1-1/+1
In order to let drivers use xtensa coprocessors on behalf of the calling process the kernel must handle coprocessor exceptions from kernel mode the same way as from user mode. This is not sufficient to allow using coprocessors transparently in IRQ or softirq context. Should such users exist they must be aware of the context and do the right thing, e.g. preserve the coprocessor state and restore it after use. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: use callx0 opcode in fast_coprocessorMax Filippov1-10/+8
Instead of emulating call0 in fast_coprocessor, use the callx0 opcode directly. Use 'ret' instead of 'jx a0'. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: clean up excsave1 initializationMax Filippov2-4/+3
Use xtensa_set_sr instead of inline assembly. Rename local variable exc_table in early_trap_init to avoid conflict with per-CPU variable of the same name. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: clean up declarations in coprocessor.hMax Filippov1-4/+3
Drop 'extern' from all function declarations. Add parameter names in declarations. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: clean up exception handler prototypesMax Filippov3-15/+13
Exception handlers are currently passed as void pointers because they may have one or two parameters. Only two handlers use the second parameter, and it is available in struct pt_regs anyway. Make all handlers have only one parameter, introduce the xtensa_exception_handler type for handlers and use it in trap_set_handler. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
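A hedged sketch of the resulting uniform prototype and how trap_set_handler would be typed after this change (a paraphrase of the idea, not a copy of the header):

  typedef void xtensa_exception_handler(struct pt_regs *regs);

  /* Returns the previously installed handler for that exception cause. */
  xtensa_exception_handler *
  trap_set_handler(int cause, xtensa_exception_handler *handler);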
2022-05-01xtensa: clean up function declarations in traps.cMax Filippov2-32/+32
Drop 'extern' from all function declarations and move those that need to be visible from traps.c to traps.h. Add 'asmlinkage' to declarations of functions defined in assembly. Add 'static' to declarations and definitions only used locally. Add argument names in declarations. Drop the unused second argument from do_multihit and do_page_fault. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: enable KCSANMax Filippov6-7/+73
Prefix arch-specific barrier macros with '__' to make use of instrumented generic macros. Prefix arch-specific bitops with 'arch_' to make use of instrumented generic functions. Provide stubs for 64-bit atomics when building with KCSAN. Disable KCSAN instrumentation in arch/xtensa/boot. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Acked-by: Marco Elver <elver@google.com>
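The '__'/'arch_' prefixes exist so the generic instrumented wrappers can sit on top of the raw arch implementations. A hedged sketch of that wrapper pattern, modeled on the asm-generic instrumented bitops rather than copied from them:

  /* The arch provides the raw operation under the arch_ name ... */
  void arch_set_bit(long nr, volatile unsigned long *addr);

  /* ... and the generic wrapper adds the KCSAN/KASAN instrumentation hook. */
  static __always_inline void set_bit(long nr, volatile unsigned long *addr)
  {
  	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
  	arch_set_bit(nr, addr);
  }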
2022-05-01xtensa: enable HAVE_VIRT_CPU_ACCOUNTING_GENMax Filippov1-0/+1
There's no direct cputime_t manipulation in the xtensa arch code, so generic virt CPU accounting may be enabled. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: enable context trackingMax Filippov2-0/+10
Put user exit context tracking call on the common kernel entry/exit path (function calls are impossible at earlier kernel entry stages because PS.EXCM is not cleared yet). Put user entry context tracking call on the user exit path. Syscalls go through this common code too, so nothing specific needs to be done for them. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: use abi_* register names in the kernel exit codeMax Filippov1-40/+42
Using plain register names is prone to errors when code is changed and new calls are added between the register load and use. Change plain register names to abi_* names in the call-heavy part of the kernel exit code to clearly indicate what's supposed to be preserved and what's not. Re-align code while at it. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: move trace_hardirqs_off call back to entry.SMax Filippov2-15/+15
The context tracking call must be done after the hardirq tracking call, otherwise lockdep_assert_irqs_disabled() called from rcu_eqs_exit() gives a warning. To avoid duplicating the context tracking logic for IRQ/exception entry paths move the trace_hardirqs_off call back to the common entry code. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: drop dead code from entry.SMax Filippov1-18/+0
KERNEL_STACK_OVERFLOW_CHECK is incomplete and has never been enabled. Remove it. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: noMMU: allow handling protection faultsMax Filippov4-11/+27
Many xtensa CPU cores without a full MMU still have memory protection features capable of raising exceptions for invalid instruction fetches/data accesses. Allow handling such exceptions. This improves the behavior of processes that pass invalid memory pointers to syscalls in noMMU configs: in case of an exception the kernel is now able to return -EINVAL from the syscall instead of killing the process. Introduce CONFIG_PFAULT that controls whether the protection fault code is enabled and register handlers for common memory protection exceptions when it is enabled. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: extract vmalloc_fault code into a functionMax Filippov1-53/+54
Move full MMU-specific code into a separate function to isolate it from more generic do_page_fault code. No functional changes. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: move asid_cache from fault.c to mmu.cMax Filippov2-1/+2
asid_cache is only useful with full MMU, but fault.c is also useful with MPU. Move asid_cache definition to MMU-specific source file. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: iss: extract and constify network callbacksMax Filippov1-19/+28
Instead of storing pointers to callback functions in struct iss_net_private::tp, move them to struct iss_net_ops and store a const pointer to it. Make a static const tuntap_ops structure with the tuntap callbacks and initialize tp.net_ops with it in tuntap_probe. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
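A hedged sketch of the constification described above; the callback set is inferred from the commit text and may not match the driver's exact field list:

  struct iss_net_ops {
  	int (*open)(struct iss_net_private *lp);
  	void (*close)(struct iss_net_private *lp);
  	int (*read)(struct iss_net_private *lp, struct sk_buff **skb);
  	int (*write)(struct iss_net_private *lp, struct sk_buff **skb);
  	unsigned short (*protocol)(struct sk_buff *skb);
  	int (*poll)(struct iss_net_private *lp);
  };

  static const struct iss_net_ops tuntap_ops = {
  	.open		= tuntap_open,
  	.close		= tuntap_close,
  	.read		= tuntap_read,
  	.write		= tuntap_write,
  	.protocol	= tuntap_protocol,
  	.poll		= tuntap_poll,
  };

  /* in tuntap_probe(): lp->tp.net_ops = &tuntap_ops; */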
2022-05-01xtensa: iss: clean up per-device locking in network driverMax Filippov1-21/+18
Per-device locking in the ISS network driver is used to protect poll timer and stats updates. Stats collection is not protected. Remove per-device locking everywhere except the stats updates. Replace the ndo_get_stats callback with ndo_get_stats64 and use proper locking there as well. As a side effect this fixes a possible deadlock between iss_net_close and iss_net_timer. Reported-by: Duoming Zhou <duoming@zju.edu.cn> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
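A hedged sketch of the ndo_get_stats64 shape referred to above; the lock and counter names are assumptions, not the driver's actual fields:

  static void iss_net_get_stats64(struct net_device *dev,
  				struct rtnl_link_stats64 *stats)
  {
  	struct iss_net_private *lp = netdev_priv(dev);

  	spin_lock_bh(&lp->stats_lock);		/* only the stats are locked now */
  	stats->rx_packets = lp->stats.rx_packets;
  	stats->tx_packets = lp->stats.tx_packets;
  	stats->rx_bytes   = lp->stats.rx_bytes;
  	stats->tx_bytes   = lp->stats.tx_bytes;
  	spin_unlock_bh(&lp->stats_lock);
  }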
2022-05-01xtensa: iss: replace iss_net_set_mac with eth_mac_addrMax Filippov1-14/+1
iss_net_set_mac is just a copy of eth_mac_addr with pointless locking. Drop this function and replace it with eth_mac_addr. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
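Wiring the generic helper in is a one-liner in the net_device_ops table; a hedged sketch with the other callbacks elided:

  static const struct net_device_ops iss_netdev_ops = {
  	.ndo_set_mac_address	= eth_mac_addr,		/* replaces iss_net_set_mac */
  	.ndo_validate_addr	= eth_validate_addr,
  };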
2022-05-01xtensa: iss: drop opened_list logic from the network driverMax Filippov1-39/+14
opened_list is used to poll all opened devices in the timer callback, but there's an individual timer associated with each device. Drop opened_list and only poll the device that is associated with the timer in the timer callback. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-01xtensa: localize labels used in memmoveMax Filippov1-10/+10
Internal labels in the memmove implementation don't need to be visible, localize them by prefixing their names with '.L'. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-04-15xtensa: fix a7 clobbering in coprocessor context load/storeMax Filippov1-2/+2
Fast coprocessor exception handler saves a3..a6, but coprocessor context load/store code uses a4..a7 as temporaries, potentially clobbering a7. 'Potentially' because coprocessor state load/store macros may not use all four temporary registers (and neither FPU nor HiFi macros do). Use a3..a6 as intended. Cc: stable@vger.kernel.org Fixes: c658eac628aa ("[XTENSA] Add support for configurable registers and coprocessors") Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-04-13arch: xtensa: platforms: Fix deadlock in rs_close()Duoming Zhou1-8/+0
There is a deadlock in rs_close(), which is shown below:

   (Thread 1)              |      (Thread 2)
                           | rs_open()
rs_close()                 |  mod_timer()
 spin_lock_bh() //(1)      |  (wait a time)
 ...                       |
 del_timer_sync()          | rs_poll()
 (wait timer to stop)      |  spin_lock() //(2)
                           |  ...

We hold timer_lock in position (1) of thread 1 and use del_timer_sync() to wait for the timer to stop, but the timer handler also needs timer_lock in position (2) of thread 2. As a result, rs_close() will block forever. This patch deletes the redundant timer_lock in order to prevent the deadlock, because there is no race condition between rs_close, rs_open and rs_poll. Signed-off-by: Duoming Zhou <duoming@zju.edu.cn> Message-Id: <20220407154430.22387-1-duoming@zju.edu.cn> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-04-13xtensa: patch_text: Fixup last cpu should be masterGuo Ren1-1/+1
These patch_text implementations use the stop_machine_cpuslocked infrastructure with an atomic cpu_count. The original idea: when the master CPU runs patch_text, the others should wait for it. But the current implementation uses the first CPU as the master, which cannot guarantee that the remaining CPUs are already waiting. This patch makes the last CPU the master to eliminate the potential risk. Fixes: 64711f9a47d4 ("xtensa: implement jump_label support") Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@kernel.org> Reviewed-by: Max Filippov <jcmvbkbc@gmail.com> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: <stable@vger.kernel.org> Message-Id: <20220407073323.743224-4-guoren@kernel.org> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
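A hedged sketch of the "last CPU to arrive is the master" scheme under stop_machine_cpuslocked(); the context struct and patching helper are assumptions:

  static int patch_text_stop_machine(void *data)
  {
  	struct patch_ctx *ctx = data;	/* hypothetical context */

  	if (atomic_inc_return(&ctx->cpu_count) == num_online_cpus()) {
  		/* Last CPU in: all the others are already spinning below. */
  		apply_patch(ctx);		/* hypothetical helper */
  		atomic_inc(&ctx->cpu_count);	/* release the waiters */
  	} else {
  		while (atomic_read(&ctx->cpu_count) <= num_online_cpus())
  			cpu_relax();
  	}
  	return 0;
  }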
2022-03-31arch: syscalls: simplify uapi/kapi directory creationMasahiro Yamada1-2/+1
$(shell ...) expands to empty. There is no need to assign it to _dummy. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
2022-03-24mm: madvise: MADV_DONTNEED_LOCKEDJohannes Weiner1-0/+2
MADV_DONTNEED historically rejects mlocked ranges, but with MLOCK_ONFAULT and MCL_ONFAULT allowing to mlock without populating, there are valid use cases for depopulating locked ranges as well.

Users mlock memory to protect secrets. There are allocators for secure buffers that want in-use memory generally mlocked, but cleared and invalidated memory to give up the physical pages. This could be done with explicit munlock -> mlock calls on free -> alloc of course, but that adds two unnecessary syscalls, heavy mmap_sem write locks, vma splits and re-merges - only to get rid of the backing pages.

Users also mlockall(MCL_ONFAULT) to suppress sustained paging, but are okay with on-demand initial population. It seems valid to selectively free some memory during the lifetime of such a process, without having to mess with its overall policy.

Why add a separate flag? Isn't this a pretty niche usecase?

- MADV_DONTNEED has been bailing on locked vmas forever. It's at least conceivable that someone, somewhere is relying on mlock to protect data from perhaps broader invalidation calls. Changing this behavior now could lead to quiet data corruption.

- It also clarifies expectations around MADV_FREE and maybe MADV_REMOVE. It avoids the situation where one quietly behaves different than the others. MADV_FREE_LOCKED can be added later.

- The combination of mlock() and madvise() in the first place is probably niche. But where it happens, I'd say that dropping pages from a locked region once they don't contain secrets or won't page anymore is much saner than relying on mlock to protect memory from speculative or errant invalidation calls. It's just that we can't change the default behavior because of the two previous points. Given that, an explicit new flag seems to make the most sense.

[hannes@cmpxchg.org: fix mips build] Link: https://lkml.kernel.org/r/20220304171912.305060-1-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Michal Hocko <mhocko@suse.com> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Nadav Amit <nadav.amit@gmail.com> Cc: David Hildenbrand <david@redhat.com> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
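A hedged userspace sketch of the intended use: a secrets buffer stays mlocked for its whole lifetime, but its backing pages can be dropped once the secret is no longer needed (requires kernel/libc headers that define MADV_DONTNEED_LOCKED, v5.18+):

  #include <sys/mman.h>

  void drop_secret_pages(void *buf, size_t len)
  {
  	/* The VMA stays locked; only the now-cleared backing pages are freed. */
  	madvise(buf, len, MADV_DONTNEED_LOCKED);
  }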
2022-03-22xtensa: define update_mmu_tlb functionMax Filippov2-0/+10
Before the commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths") there was a call to update_mmu_cache in alloc_set_pte that used to invalidate the TLB entry caching the invalid PTE that caused a page fault. That commit removed that call, so now the invalid TLB entry survives, causing repetitive page faults on the CPU that took the initial fault until that TLB entry is occasionally evicted. This issue was spotted by the xtensa TLB sanity checker. Fix this issue by defining an update_mmu_tlb function that flushes the TLB entry for the faulting address. Cc: stable@vger.kernel.org # 5.12+ Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
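A hedged sketch of the arch hook's shape, assuming it can simply flush the single TLB entry for the faulting address (the exact upstream body may differ):

  #define __HAVE_ARCH_UPDATE_MMU_TLB

  void update_mmu_tlb(struct vm_area_struct *vma,
  		    unsigned long address, pte_t *ptep)
  {
  	/* Drop the stale entry so the next access re-walks the page table. */
  	local_flush_tlb_page(vma, address);
  }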
2022-03-21arch: Add pmd_pfn() where it is missingMike Rapoport1-0/+1
We need to use this function in common code, so define it for architectures and/or configurations that miss it. The result of pmd_pfn() will only be used if TRANSPARENT_HUGEPAGE is enabled, but a function or macro called pmd_pfn() must be defined, even on machines with two level page tables. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-20xtensa: fix xtensa_wsr always writing 0Max Filippov1-2/+2
The commit cad6fade6e78 ("xtensa: clean up WSR*/RSR*/get_sr/set_sr") replaced the 'WSR' macro in the function xtensa_wsr with 'xtensa_set_sr', but the variable 'v' in the xtensa_set_sr body shadowed the argument 'v' passed to it, resulting in the wrong value being written to the debug registers. Fix that by removing the intermediate variable from the xtensa_set_sr macro body. Cc: stable@vger.kernel.org Fixes: cad6fade6e78 ("xtensa: clean up WSR*/RSR*/get_sr/set_sr") Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
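A hedged illustration of the shadowing pitfall (not the literal xtensa macro): if the macro body declares a local with the same name an argument might have at the call site, the cast reads the brand-new, uninitialized local instead of the caller's value. write_sr() is a hypothetical stand-in for the real WSR inline asm.

  #define set_sr_buggy(x, sr) \
  	({ \
  		unsigned int v = (unsigned int)(x); /* if the caller passes 'v', this reads itself */ \
  		write_sr(sr, v); \
  	})

  #define set_sr_fixed(x, sr) \
  		write_sr(sr, (unsigned int)(x))	/* no intermediate variable to shadow */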