path: root/arch/arc/kernel/smp.c
2022-10-17  ARC: Fix comment typo  [Jilin Yuan; 1 file, -1/+1]
- Remove one of the repeated 'call' in the comment at line 396.
- Delete the redundant words 'to' and 'since'.
Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2022-07-29  profile: setup_profiling_timer() is mostly not implemented  [Ben Dooks; 1 file, -8/+0]
The setup_profiling_timer() is mostly un-implemented by many architectures. In many places it isn't guarded by CONFIG_PROFILING, which is needed for it to be used. Make it a weak symbol in kernel/profile.c and remove the 'return -EINVAL' implementations from the kernel.
There are a couple of architectures which do return 0 from setup_profiling_timer(), but they don't seem to do anything else with it. To keep the /proc compatibility for now, leave these for a future update or removal.
On ARM, this fixes the following sparse warning:
  arch/arm/kernel/smp.c:793:5: warning: symbol 'setup_profiling_timer' was not declared. Should it be static?
Link: https://lkml.kernel.org/r/20220721195509.418205-1-ben-linux@fluff.org
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
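For illustration, the weak-symbol approach lets an architecture override the default only when it actually implements the hook; a minimal sketch, not the literal hunk from this commit:
----
/* kernel/profile.c: weak default; an architecture with a real
 * implementation provides its own symbol, which overrides this one. */
int __weak setup_profiling_timer(unsigned int multiplier)
{
	return -EINVAL;
}
----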
2022-04-18  ARC: remove redundant READ_ONCE() in cmpxchg loop  [Bang Li; 1 file, -1/+1]
This patch reverts commit 7082a29c22ac ("ARC: use ACCESS_ONCE in cmpxchg loop"). It is not necessary to use READ_ONCE() here because cmpxchg contains a compiler barrier, added by commit d57f727264f1 ("ARC: add compiler barrier to LLSC based cmpxchg").
Signed-off-by: Bang Li <libang.linuxer@gmail.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
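The loop in question looks roughly like this after the change (a sketch of the pattern, not the verbatim hunk; ipi_data_ptr points at the receiver's pending-msg word):
----
unsigned long old, new;

do {
	new = old = *ipi_data_ptr;	/* plain load suffices: the cmpxchg()
					 * below already implies a compiler
					 * barrier per commit d57f727264f1 */
	new |= 1U << msg;
} while (cmpxchg(ipi_data_ptr, old, new) != old);
----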
2022-04-18  ARC: fix typos in comments  [Julia Lawall; 1 file, -1/+1]
Various spelling mistakes in comments. Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24  ARC: switch to generic bitops  [Vineet Gupta; 1 file, -2/+0]
- !LLSC now only needs a single spinlock for atomics and bitops
- Some codegen changes (slight bloat) with generic bitops:
  1. Code increase due to the LD-check-atomic paradigm vs. an unconditional atomic op (which dirties the cache line even if the bit is set already). So despite the increase, generic is the right thing to do; see the sketch below.
  2. Code decrease (but use of costlier instructions such as DIV vs. shift-based math) due to signed arithmetic; note the type of nr. This needs to be revisited separately.
     arc:     static inline int test_bit(unsigned int nr, const volatile unsigned long *addr)
     generic: static inline int test_bit(int nr, const volatile unsigned long *addr)
Link: https://lore.kernel.org/r/20180830135749.GA13005@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[vgupta: wrote patch based on Will's poc, analysed codegen diffs]
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
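The "LD-check-atomic" idea: load and test first, and only fall back to the atomic RMW (which dirties the cache line) when the bit actually needs changing. A hedged sketch of the shape of such a helper, illustrative rather than the generic kernel code verbatim:
----
static inline int test_and_set_bit_sketch(unsigned int nr,
					  volatile unsigned long *addr)
{
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);
	unsigned long *p = (unsigned long *)addr + nr / BITS_PER_LONG;

	/* LD-check: skip the atomic op (and the cache-line dirtying)
	 * if the bit is already set */
	if (READ_ONCE(*p) & mask)
		return 1;

	/* atomic fallback: set the bit and report its old value */
	return !!(atomic_long_fetch_or(mask, (atomic_long_t *)p) & mask);
}
----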
2021-08-24  arch/arc/kernel/: fix misspellings using codespell tool  [Changcheng Deng; 1 file, -1/+1]
Some typos were found by the codespell tool:
  ./intc-compact.c:145: prioity ==> priority
  ./smp.c:286: recevier ==> receiver
  ./stacktrace.c:152: prelogue ==> prologue
Fix the typos found by codespell.
Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Changcheng Deng <deng.changcheng@zte.com.cn>
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2021-05-12  sched/core: Initialize the idle task with preemption disabled  [Valentin Schneider; 1 file, -1/+0]
As pointed out by commit de9b8f5dcbd9 ("sched: Fix crash trying to dequeue/enqueue the idle thread"), init_idle() can and will be invoked more than once on the same idle task. At boot time, it is invoked for the boot CPU thread by sched_init(). Then smp_init() creates the threads for all the secondary CPUs and invokes init_idle() on them. As the hotplug machinery brings the secondaries to life, it will issue calls to idle_thread_get(), which itself invokes init_idle() yet again. In this case it's invoked twice more per secondary: at _cpu_up(), and at bringup_cpu().
Given smp_init() already initializes the idle tasks for all *possible* CPUs, no further initialization should be required.
Now, removing init_idle() from idle_thread_get() exposes some interesting expectations with regards to the idle task's preempt_count: the secondary startup always issues a preempt_disable(), requiring some reset of the preempt count to 0 between hot-unplug and hotplug, which is currently served by idle_thread_get() -> init_idle().
Given the idle task is supposed to have preemption disabled once and never see it re-enabled, it seems that what we actually want is to initialize its preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove init_idle() from idle_thread_get().
Secondary startups were patched via coccinelle:
  @begone@
  @@
  -preempt_disable();
  ...
  cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512094636.2958515-1-valentin.schneider@arm.com
2020-10-05  ARC: SMP: fix typo and use "come up" instead of "comeup"  [Mike Rapoport; 1 file, -1/+1]
When a secondary CPU fails to come up, there is a missing space in the log:
  Timeout: CPU1 FAILED to comeup !!!
Fix it.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  [Thomas Gleixner; 1 file, -4/+1]
Based on 2 normalized pattern(s):
  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation
  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation #
As extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-02-28  ARC: setup cpu possible mask according to possible-cpus dts property  [Eugeniy Paltsev; 1 file, -10/+40]
As we have an option in U-Boot to set the CPU mask for running Linux, we want to pass information to the kernel about which CPU cores should be brought up. So we patch the kernel dtb in U-Boot to set the possible-cpus property. This also allows us to have a correctly set up MCIP debug mask.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-11-07  Merge branch 'linus' into locking/core, to resolve conflicts  [Ingo Molnar; 1 file, -0/+5]
Conflicts:
  include/linux/compiler-clang.h
  include/linux/compiler-gcc.h
  include/linux/compiler-intel.h
  include/uapi/linux/stddef.h
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-10-25  locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()  [Mark Rutland; 1 file, -1/+1]
Please do not apply this to mainline directly; instead please re-run the coccinelle script shown below and apply its output.
For several reasons, it is desirable to use {READ,WRITE}_ONCE() in preference to ACCESS_ONCE(), and new code is expected to use one of the former. So far, there's been no reason to change most existing uses of ACCESS_ONCE(), as these aren't harmful, and changing them results in churn.
However, for some features, the read/write distinction is critical to correct operation. To distinguish these cases, separate read/write accessors must be used. This patch migrates (most) remaining ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following coccinelle script:
----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()
// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

virtual patch

@ depends on patch @
expression E1, E2;
@@

- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)

@ depends on patch @
expression E;
@@

- ACCESS_ONCE(E)
+ READ_ONCE(E)
----
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: mpe@ellerman.id.au
Cc: shuah@kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: viro@zeniv.linux.org.uk
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
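In C terms the mechanical conversion is simply the following (illustrative; 'flag' is a hypothetical variable):
----
/* before */
if (ACCESS_ONCE(flag))
	ACCESS_ONCE(flag) = 0;

/* after: the read/write distinction is now explicit */
if (READ_ONCE(flag))
	WRITE_ONCE(flag, 0);
----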
2017-10-11  ARC: unbork module link errors with !CONFIG_ARC_HAS_LLSC  [Vineet Gupta; 1 file, -0/+5]
| SYSMAP  System.map
|   Building modules, stage 2.
|   MODPOST 18 modules
| ERROR: "smp_atomic_ops_lock" [drivers/gpu/drm/drm_kms_helper.ko] undefined!
| ERROR: "smp_bitops_lock" [drivers/gpu/drm/drm_kms_helper.ko] undefined!
| ERROR: "smp_atomic_ops_lock" [drivers/gpu/drm/drm.ko] undefined!
| ERROR: "smp_bitops_lock" [drivers/gpu/drm/drm.ko] undefined!
| ../scripts/Makefile.modpost:91: recipe for target '__modpost' failed
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
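The undefined symbols are the !LLSC fallback spinlocks, so the fix is presumably to export them from smp.c; a hedged sketch of what such a change looks like:
----
#ifndef CONFIG_ARC_HAS_LLSC
/* R-M-W assist locks, referenced by modules built without LLSC support */
EXPORT_SYMBOL_GPL(smp_atomic_ops_lock);
EXPORT_SYMBOL_GPL(smp_bitops_lock);
#endif
----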
2017-03-03  sched/headers: Move task->mm handling methods to <linux/sched/mm.h>  [Ingo Molnar; 1 file, -1/+1]
Move the following task->mm helper APIs into a new header file, <linux/sched/mm.h>, to further reduce the size and complexity of <linux/sched.h>. Here is how the APIs are used in various kernel files:
# mm_alloc(): arch/arm/mach-rpc/ecard.c fs/exec.c include/linux/sched/mm.h kernel/fork.c
# __mmdrop(): arch/arc/include/asm/mmu_context.h include/linux/sched/mm.h kernel/fork.c
# mmdrop(): arch/arm/mach-rpc/ecard.c arch/m68k/sun3/mmu_emu.c arch/x86/mm/tlb.c drivers/gpu/drm/amd/amdkfd/kfd_process.c drivers/gpu/drm/i915/i915_gem_userptr.c drivers/infiniband/hw/hfi1/file_ops.c drivers/vfio/vfio_iommu_spapr_tce.c fs/exec.c fs/proc/base.c fs/proc/task_mmu.c fs/proc/task_nommu.c fs/userfaultfd.c include/linux/mmu_notifier.h include/linux/sched/mm.h kernel/fork.c kernel/futex.c kernel/sched/core.c mm/khugepaged.c mm/ksm.c mm/mmu_context.c mm/mmu_notifier.c mm/oom_kill.c virt/kvm/kvm_main.c
# mmdrop_async_fn(): include/linux/sched/mm.h
# mmdrop_async(): include/linux/sched/mm.h kernel/fork.c
# mmget_not_zero(): fs/userfaultfd.c include/linux/sched/mm.h mm/oom_kill.c
# mmput(): arch/arc/include/asm/mmu_context.h arch/arc/kernel/troubleshoot.c arch/frv/mm/mmu-context.c arch/powerpc/platforms/cell/spufs/context.c arch/sparc/include/asm/mmu_context_32.h drivers/android/binder.c drivers/gpu/drm/etnaviv/etnaviv_gem.c drivers/gpu/drm/i915/i915_gem_userptr.c drivers/infiniband/core/umem.c drivers/infiniband/core/umem_odp.c drivers/infiniband/core/uverbs_main.c drivers/infiniband/hw/mlx4/main.c drivers/infiniband/hw/mlx5/main.c drivers/infiniband/hw/usnic/usnic_uiom.c drivers/iommu/amd_iommu_v2.c drivers/iommu/intel-svm.c drivers/lguest/lguest_user.c drivers/misc/cxl/fault.c drivers/misc/mic/scif/scif_rma.c drivers/oprofile/buffer_sync.c drivers/vfio/vfio_iommu_type1.c drivers/vhost/vhost.c drivers/xen/gntdev.c fs/exec.c fs/proc/array.c fs/proc/base.c fs/proc/task_mmu.c fs/proc/task_nommu.c fs/userfaultfd.c include/linux/sched/mm.h kernel/cpuset.c kernel/events/core.c kernel/events/uprobes.c kernel/exit.c kernel/fork.c kernel/ptrace.c kernel/sys.c kernel/trace/trace_output.c kernel/tsacct.c mm/memcontrol.c mm/memory.c mm/mempolicy.c mm/migrate.c mm/mmu_notifier.c mm/nommu.c mm/oom_kill.c mm/process_vm_access.c mm/rmap.c mm/swapfile.c mm/util.c virt/kvm/async_pf.c
# mmput_async(): include/linux/sched/mm.h kernel/fork.c mm/oom_kill.c
# get_task_mm(): arch/arc/kernel/troubleshoot.c arch/powerpc/platforms/cell/spufs/context.c drivers/android/binder.c drivers/gpu/drm/etnaviv/etnaviv_gem.c drivers/infiniband/core/umem.c drivers/infiniband/core/umem_odp.c drivers/infiniband/hw/mlx4/main.c drivers/infiniband/hw/mlx5/main.c drivers/infiniband/hw/usnic/usnic_uiom.c drivers/iommu/amd_iommu_v2.c drivers/iommu/intel-svm.c drivers/lguest/lguest_user.c drivers/misc/cxl/fault.c drivers/misc/mic/scif/scif_rma.c drivers/oprofile/buffer_sync.c drivers/vfio/vfio_iommu_type1.c drivers/vhost/vhost.c drivers/xen/gntdev.c fs/proc/array.c fs/proc/base.c fs/proc/task_mmu.c include/linux/sched/mm.h kernel/cpuset.c kernel/events/core.c kernel/exit.c kernel/fork.c kernel/ptrace.c kernel/sys.c kernel/trace/trace_output.c kernel/tsacct.c mm/memcontrol.c mm/memory.c mm/mempolicy.c mm/migrate.c mm/mmu_notifier.c mm/nommu.c mm/util.c
# mm_access(): fs/proc/base.c include/linux/sched/mm.h kernel/fork.c mm/process_vm_access.c
# mm_release(): arch/arc/include/asm/mmu_context.h fs/exec.c include/linux/sched/mm.h include/uapi/linux/sched.h kernel/exit.c kernel/fork.c
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-02-27  mm: add new mmget() helper  [Vegard Nossum; 1 file, -1/+1]
Apart from adding the helper function itself, the rest of the kernel is converted mechanically using:
  git grep -l 'atomic_inc.*mm_users' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_users);/mmget\(\1\);/'
  git grep -l 'atomic_inc.*mm_users' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_users);/mmget\(\&\1\);/'
This is needed for a later patch that hooks into the helper, but might be a worthwhile cleanup on its own. (Michal Hocko provided most of the kerneldoc comment.)
Link: http://lkml.kernel.org/r/20161218123229.22952-2-vegard.nossum@oracle.com
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-02-27  mm: add new mmgrab() helper  [Vegard Nossum; 1 file, -1/+1]
Apart from adding the helper function itself, the rest of the kernel is converted mechanically using:
  git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_count);/mmgrab\(\1\);/'
  git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_count);/mmgrab\(\&\1\);/'
This is needed for a later patch that hooks into the helper, but might be a worthwhile cleanup on its own. (Michal Hocko provided most of the kerneldoc comment.)
Link: http://lkml.kernel.org/r/20161218123229.22952-1-vegard.nossum@oracle.com
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
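At the time of these two commits both helpers were thin wrappers around atomic_inc(); a sketch matching the sed conversions above:
----
static inline void mmget(struct mm_struct *mm)
{
	atomic_inc(&mm->mm_users);	/* pin the address space */
}

static inline void mmgrab(struct mm_struct *mm)
{
	atomic_inc(&mm->mm_count);	/* pin struct mm_struct itself */
}
----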
2017-01-24  ARCv2: smp-boot: wake_flag polling by non-Masters needs to be uncached  [Vineet Gupta; 1 file, -3/+16]
This is needed on HS38 cores for setting up the IO-Coherency aperture properly. The polling could perturb the caches and coherency fabric, which could be wrong in the small window when the Master is setting up the IOC aperture etc. in arc_cache_init(). We do it only for ARCv2 based builds, to not affect the EZChip ARCompact based platform.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-01-24  ARC: smp-boot: Decouple Non masters waiting API from jump to entry point  [Vineet Gupta; 1 file, -2/+4]
For run-on-reset SMP configs, non-master cores call a routine which waits until the Master gives it a "go" signal (currently using a shared mem flag). The same routine then jumps to the well-known entry point of all non-Master cores, i.e. @first_lines_of_secondary.
This patch moves the latter part out into one single place in early boot code. This is better in terms of abstraction (the wait API only waits, then returns), leaving out the "jump off to" part.
In the actual implementation this requires some restructuring of the early boot code, as the Master now jumps to BSS setup explicitly, vs. falling through into it before.
Technically this patch doesn't cause any functional change; it just moves the ugly #ifdef'ry from assembly code to "C".
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-11-08  ARC: IRQ: Do not use hwirq as virq and vice versa  [Yuriy Kolerov; 1 file, -4/+9]
This came up when reviewing code to address the missing IRQ affinity setting in the AXS103 platform and/or implementing hierarchical IRQ domains.
- smp_ipi_irq_setup() callers pass a hwirq, but it in turn calls request_percpu_irq(), which expects a Linux virq. So invoke irq_find_mapping() to do the conversion (and make this explicit in the code by renaming the args appropriately).
- idu_of_init()/idu_cascade_isr() were similarly using a Linux virq where a hwirq is expected, so do the conversion using the irqd_to_hwirq() helper.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
[vgupta: made changelog a bit more concise]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
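The hwirq -> virq conversion in the IPI setup path looks roughly like this (a sketch; the handler and per-cpu dev-id names are hypothetical):
----
static void smp_ipi_irq_setup(int cpu, irq_hw_number_t hwirq)
{
	/* NULL domain means "the default domain" to irq_find_mapping() */
	int virq = irq_find_mapping(NULL, hwirq);

	/* request_percpu_irq() wants a Linux virq, never a raw hwirq */
	if (!virq ||
	    request_percpu_irq(virq, ipi_isr /* hypothetical handler */,
			       "IPI", &ipi_dev /* hypothetical percpu id */))
		panic("Percpu IRQ request failed for %u\n", hwirq);
}
----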
2016-10-31  ARC: [SMP] avoid overriding present cpumask  [Noam Camus; 1 file, -4/+6]
In smp_prepare_cpus() we set the present cpu mask, as part of init, for all CPUs in the range [0-max_cpus]. This is done without checking whether the mask has already been set. On the eznps platform this mask is already initialized in smp_init_cpus(), using the hook plat_smp_ops.init_early_smp(). So, to avoid overriding the present cpu mask, we check the number of bits which are set in it: at the beginning only the bit for the boot CPU is set, so if no more than one bit is set we can be assured that nothing else has populated the mask.
Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
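The "count the bits" guard can be expressed with the stock cpumask helpers; a hedged sketch:
----
static void smp_prepare_cpus_sketch(unsigned int max_cpus)
{
	unsigned int i;

	/* If more than the boot CPU's bit is set, a platform hook
	 * (e.g. plat_smp_ops.init_early_smp()) already populated the
	 * present mask -- don't override it. */
	if (num_present_cpus() <= 1) {
		for (i = 0; i < max_cpus; i++)
			set_cpu_present(i, true);
	}
}
----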
2016-05-09  ARC: Mark secondary cpu online only after all HW setup is done  [Noam Camus; 1 file, -5/+5]
In the SMP setup, the master loops over each present CPU calling cpu_up(). For ARC this returns as soon as the new CPU's status becomes online; however, the secondary may still be doing HW initialization at the machine or platform hook level. So turn the secondary online only after all HW setup is done.
Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
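Conceptually, the secondary's startup sequence is reordered so that set_cpu_online() comes after all the hardware hooks; a hedged sketch of the shape of that path (hook names approximate, not the verbatim ARC code):
----
void start_kernel_secondary_sketch(void)
{
	unsigned int cpu = smp_processor_id();

	setup_processor();		/* core/HW init ... */
	if (plat_smp_ops.init_per_cpu)	/* ... incl. platform hooks */
		plat_smp_ops.init_per_cpu(cpu);

	notify_cpu_starting(cpu);
	set_cpu_online(cpu, true);	/* only now may cpu_up() return */

	local_irq_enable();
	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
}
----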
2016-05-09  ARC: clockevent: switch to cpu notifier for clockevent setup  [Noam Camus; 1 file, -2/+0]
ARC timers have so far been handled as "legacy", w/o explicit description in DT. This poses a challenge for newer platforms wanting to use them. This series will eventually help move the timers over to DT.
This patch does a small change of using a CPU notifier to set the clockevent on non-boot CPUs, so explicit setup is done only on the boot CPU (which will later be done by DT).
Signed-off-by: Noam Camus <noamc@ezchip.com>
[vgupta: broken off from a bigger patch]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-05-09  ARC: opencode arc_request_percpu_irq  [Vineet Gupta; 1 file, -1/+14]
- The idea is to remove the API usage since it has a subtle design flaw: it relies on being called on cpu0 first. This is true for some early per-cpu IRQs such as TIMER/IPI, but not for late-probed per-cpu peripherals such as perf. And its usage in perf has already bitten us once: see commit c6317bc7c5ab ("ARCv2: perf: Ensure perf intr gets enabled on all cores"), where we ended up open coding it anyway.
- The seeming duplication will go away once we start using a cpu notifier for timer setup.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-03-01  arch/hotplug: Call into idle with a proper state  [Thomas Gleixner; 1 file, -1/+1]
Let the non-boot CPUs call into idle with the corresponding hotplug state, so the hotplug core can handle the further bringup. That's a first step to convert the boot side of the hotplugged CPUs to do all the synchronization with the other side through the state machine. For now it'll only start the hotplug thread and kick the full bringup of the cpu.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Rik van Riel <riel@redhat.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-02-24  ARC: SMP: No need for CONFIG_ARC_IPI_DBG  [Vineet Gupta; 1 file, -3/+0]
This was more relevant during SMP bringup. The warning for a bogus msg had better always be visible.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-17  ARC: smp: Rename platform hook @init_cpu_smp -> @init_per_cpu  [Vineet Gupta; 1 file, -2/+2]
Makes it similar to smp_ops, which also has a callback with the same name.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-17  ARC: rename smp operation init_irq_cpu() to init_per_cpu()  [Noam Camus; 1 file, -2/+2]
This better reflects its description, i.e. "any needed setup...", and not just an "IPI request".
Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: smp: Introduce smp hook @init_irq_cpu called for all cores  [Vineet Gupta; 1 file, -0/+4]
Note this is not part of the platform-owned static machine_desc, but more of the device-owned plat_smp_ops (rather misnamed), which an IPI provider or some such typically defines. This will help us separate out the IPI registration from the platform-specific init_cpu_smp() into the device-specific init_irq_cpu().
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: smp: Rename platform hook @init_smp -> @init_cpu_smp  [Vineet Gupta; 1 file, -2/+2]
This better conveys that it is called for each cpu.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: smp: Introduce smp hook @init_early_smp for Master core  [Vineet Gupta; 1 file, -2/+10]
This adds a platform-agnostic early SMP init hook which is called on the Master core before setup_processor():
  setup_arch()
    smp_init_cpus()
      smp_ops.init_early_smp()
    ...
    setup_processor()
How this helps:
- Used for one-time init of certain SMP-centric IP blocks, before calling setup_processor(), which probes various bits of the core, possibly including this block
- Currently platforms need to call this IP block init from their init routines, which doesn't make sense, as this is specific to the ARC core and not the platform, and otherwise requires copy/paste in all of them (and hence a possible point of failure), e.g. MCIP init is called from 2 platforms currently (axs10x and sim), which will go away once we have this.
This change only adds the hooks, but they are empty for now. The next commit will populate them and remove the explicit init calls from platforms.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
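Pulling the hooks from this series together, plat_smp_ops ends up looking roughly like this (a sketch reconstructed from the log entries above; the real struct lives in arch/arc/include/asm/smp.h and may differ in detail):
----
struct plat_smp_ops {
	const char *info;			/* SMP provider description */
	void (*init_early_smp)(void);		/* once, on Master core */
	void (*init_per_cpu)(int cpu);		/* on every core, e.g. IPI req */
	void (*cpu_kick)(int cpu, unsigned long pc);	/* wake a secondary */
	void (*ipi_send)(int cpu);
	void (*ipi_clear)(int irq);
};
----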
2015-10-17  ARC: smp: Move default boot kick/wait code out of MCIP into common code  [Vineet Gupta; 1 file, -25/+21]
For the non halt-on-reset case, all cores start off simultaneously in @stext. Master core0 proceeds with the kernel boot, while the others spin-wait on @wake_flag being set by the master once it is ready. So NO hardware assist is needed for the master to "kick" the others. This patch moves this soft implementation out of mcip.c (as there is no hardware assist involved) into common smp.c.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
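The soft kick/wait pair is small enough to sketch in full (names follow the description above; details hedged):
----
static volatile int wake_flag;

/* Master: release one spinning secondary */
static void arc_default_smp_cpu_kick(int cpu, unsigned long pc)
{
	BUG_ON(cpu == 0);	/* core0 is already running the kernel */
	wake_flag = cpu;
}

/* Secondary: spin in early boot until the master kicks us */
void arc_platform_smp_wait_to_boot(int cpu)
{
	while (wake_flag != cpu)
		;
	wake_flag = 0;
}
----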
2015-06-22  ARCv2: SMP: ARConnect debug/robustness  [Vineet Gupta; 1 file, -4/+16]
- Handle possible interrupt coalescing from the MCIP
- Check that the previous IPI was ACKed before sending a new one
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22  ARC: make plat_smp_ops weak to allow over-rides  [Vineet Gupta; 1 file, -1/+1]
This allows platforms to provide their own cpu wakeup routines as well as IPI send/clear backends, while allowing an SMP kernel w/o any such backend to build/boot.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-19  ARC: fix section mismatch with allyesconfig  [Vineet Gupta; 1 file, -1/+1]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-02-02  ARC: use ACCESS_ONCE in cmpxchg loop  [Vineet Gupta; 1 file, -1/+1]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-12-12  ARC: R-M-W assist locks only needed for !LLSC  [Vineet Gupta; 1 file, -0/+2]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-09-27  ARC: Allow SMP kernel to build/boot on UP-only infrastructure  [Vineet Gupta; 1 file, -1/+1]
In light of a recent SNAFU with the SMP build, allow a simple platform to build as SMP but run UP.
* Remove the dependence on the simulation SMP extension, to enable quick build/test iterations of the SMP kernel.
* In the absence of platform SMP registration, prevent the NULL smp feature name from borking the system.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-07-23  ARC: prune extra header includes from smp.c  [Vineet Gupta; 1 file, -8/+0]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-07-23  ARC: [SMP] unify cpu private IRQ requests (TIMER/IPI)  [Vineet Gupta; 1 file, -11/+4]
The current cpu-private IRQ registration is ugly, as it requires exposing arch_unmask_irq() outside of the intc code. So switch to the percpu IRQ APIs:
- request_percpu_irq [boot core]
- enable_percpu_irq  [all cores]
Encapsulated in the helper arc_request_percpu_irq(), sketched below.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
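A hedged sketch of what such a helper looks like, per the description above:
----
void arc_request_percpu_irq(int irq, int cpu, irq_handler_t isr,
			    const char *irq_nm, void __percpu *dev)
{
	/* Boot core registers the handler once for all cores... */
	if (!cpu) {
		if (request_percpu_irq(irq, isr, irq_nm, dev))
			panic("Percpu IRQ request failed for %d\n", irq);
	}

	/* ...while every core (boot included) unmasks it locally */
	enable_percpu_irq(irq, 0);
}
----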
2014-06-26  ARC: [SMP] Fix IPI IRQ registration  [Noam Camus; 1 file, -2/+13]
Handle it just like the timer: the current request_percpu_irq() would fail on non-boot cpus, and thus the IRQ will remain unmasked on those cpus.
[vgupta: fix changelog]
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-06-03  ARC: arc_local_timer_setup() need not pass own cpu id  [Vineet Gupta; 1 file, -1/+1]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-12-23  ARC: [SMP] optimize IPI send and receive  [Vineet Gupta; 1 file, -28/+40]
* Don't send an IPI if the receiver already has a pending IPI; atomically piggyback the new msg onto the pending msg (see the sketch below).
* The IPI receiver looping on xchg() is not required.
References: https://lkml.org/lkml/2013/11/25/232
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
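The send side, sketched in C (names approximate the smp.c implementation):
----
static DEFINE_PER_CPU(unsigned long, ipi_data);

static void ipi_send_msg_one(int cpu, enum ipi_msg_type msg)
{
	unsigned long __percpu *ipi_data_ptr = per_cpu_ptr(&ipi_data, cpu);
	unsigned long old, new;

	do {
		new = old = *ipi_data_ptr;
		new |= 1U << msg;	/* piggyback onto pending msgs */
	} while (cmpxchg(ipi_data_ptr, old, new) != old);

	/* Interrupt the receiver only if nothing was already pending;
	 * otherwise its in-flight IPI will pick up this msg too. */
	if (!old)
		plat_smp_ops.ipi_send(cpu);
}
----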
2013-12-23  ARC: [SMP] simplify IPI code  [Vineet Gupta; 1 file, -28/+28]
* ipi_data is just a word, so there is no need to keep it as a struct
* find_next_bit() is not needed to loop through a 32-bit word; ffs suffices
2013-12-23  ARC: [SMP] cpu halt interface doesn't need "self" cpu-id  [Vineet Gupta; 1 file, -5/+4]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-12-23  ARC: [SMP] IPI ACK interface doesn't need "self" cpu-id  [Vineet Gupta; 1 file, -1/+1]
The interface is confusing: it feels like we are getting "sender" info, whereas it is the "receiver", which can very well be retrieved by smp_processor_id(), if need be.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-12-23  ARC: [SMP] cpumask not needed in IPI send path  [Vineet Gupta; 1 file, -9/+14]
The current IPI sending callstack needlessly involves a cpumask:
  arch_send_call_function_single_ipi(cpu) / smp_send_reschedule(cpu)
    ipi_send_msg(cpumask_of(cpu))          --> [cpu to cpumask]
      plat_smp_ops.ipi_send(callmap)
        for_each_cpu(callmap)              --> [cpumask to cpu]
          do_plat_specific_ipi_PER_CPU
Given that the current backends are not capable of 1:N IPIs, let's simplify the interface for now by keeping "a" cpu all along.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-07  smp, ARC: kill SMP single function call interrupt  [Jiang Liu; 1 file, -6/+1]
Commit 9a46ad6d6df3b54 ("smp: make smp_call_function_many() use logic similar to smp_call_function_single()") has unified the way single and multiple cross-CPU function calls are handled. Now only one interrupt is needed for architecture-specific code to support the generic SMP function call interfaces, so kill the redundant single function call interrupt.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06  ARC: [SMP] TLB flush  [Vineet Gupta; 1 file, -0/+1]
- Add mm_cpumask setting (aggregating only, unlike some other arches), used to restrict the TLB flush cross-calling
- Cross-calling versions of the TLB flush routines (thanks to Noam); see the sketch below
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
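The cross-calling flavour follows the usual pattern of running the local flush on every CPU recorded in mm_cpumask; a hedged sketch (the ARC routines themselves live in the TLB code, while smp.c gains the mm_cpumask bookkeeping):
----
void flush_tlb_mm(struct mm_struct *mm)
{
	/* Run the local flush on each CPU that has ever run this mm */
	on_each_cpu_mask(mm_cpumask(mm),
			 (smp_call_func_t)local_flush_tlb_mm, mm, 1);
}
----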
2013-11-06  ARC: use __weak instead of __attribute__((weak))  [Vineet Gupta; 1 file, -1/+1]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-06-27  arc: delete __cpuinit usage from all arc files  [Paul Gortmaker; 1 file, -2/+2]
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.
After a discussion on LKML [1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up(), which are arch-independent (kernel/cpu.c), are flagged as __cpuinit -- so if we remove the __cpuinit from arch-specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.
This removes all the arch/arc uses of the __cpuinit macros from all C files. Currently arc does not have any __CPUINIT used in assembly files.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>