|
This patch introduces Blackfin-specific bits to support the current
tip of the interrupt pipeline development, mainly:
- 2/3-level interrupt maps (sparse IRQs)
- generic virq handling
- sysinfo v2 format for ipipe_get_sysinfo()
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
No functional changes here.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
All chips converted.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Fix up the open-coded access to irq_desc and use the proper wrappers.
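A minimal sketch of the conversion (illustrative helper; irq_to_desc() and
generic_handle_irq_desc() are the standard genirq wrappers):

#include <linux/irq.h>

/* Illustrative only: fetch the descriptor through the wrapper instead of
 * indexing an irq_desc[] array that may not exist with sparse IRQs. */
static void example_dispatch(unsigned int irq)
{
        struct irq_desc *desc = irq_to_desc(irq);   /* was: &irq_desc[irq] */

        if (desc)
                generic_handle_irq_desc(irq, desc);
}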
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Take advantage of more Blackfin-specific insns, and only initialize
registers required by the ABI.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
In order to safely work around anomaly 05000491, we have to execute IFLUSH
from L1 instruction sram. The trouble with multi-core systems is that each
core's L1 sram is visible only to that core. So we can't just place the
functions into L1 and call them directly. We need to set up a jump table and
place the entry point in external memory; it will call the right function
based on the active core.
In the process, convert from the manual relocation of a small bit of code
into Core B's L1 to the more general framework we already have in place
for loading arbitrary pieces of code into L1.
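A rough sketch of the dispatch idea; the function and array names below are
hypothetical, not the actual Blackfin symbols:

#include <linux/smp.h>

typedef void (*iflush_fn_t)(unsigned long start, unsigned long end);

/* Per-core copies of the IFLUSH routine live in each core's private L1
 * instruction sram (hypothetical names). */
extern void corea_iflush_range_l1(unsigned long start, unsigned long end);
extern void coreb_iflush_range_l1(unsigned long start, unsigned long end);

static iflush_fn_t iflush_jump_table[] = {
        corea_iflush_range_l1,  /* used when running on core A */
        coreb_iflush_range_l1,  /* used when running on core B */
};

/* External-memory entry point: callers run with preemption disabled and
 * get dispatched to the copy visible to the currently running core. */
void blackfin_iflush_range(unsigned long start, unsigned long end)
{
        iflush_jump_table[smp_processor_id()](start, end);
}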
Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Re-use some of the existing cpu hotplugging code in the process.
Signed-off-by: Graf Yang <graf.yang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
We need to place icache flush funcs into L1 inst sram to work around a
hardware anomaly. But this currently breaks SMP support as the L1 inst
sram is per-core and cannot be called directly. So in preparation for
making that work, split the two options.
Further, split out the SMP dependency so that some of the options can still
be allowed under SMP.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
The smp_processor_id() API requires that preempt be disabled when calling
it, so make sure it is when we go to send messages to other processors.
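A minimal sketch of the pattern; platform_send_ipi_cpu() is assumed here as
the Blackfin IPI helper:

#include <linux/smp.h>
#include <linux/cpumask.h>

/* get_cpu() disables preemption, so the processor id stays valid while we
 * send messages to the other processors. */
static void example_send_ipi(const struct cpumask *callmap)
{
        unsigned int this_cpu = get_cpu();      /* preemption off from here */
        unsigned int cpu;

        for_each_cpu(cpu, callmap) {
                if (cpu == this_cpu)
                        continue;
                platform_send_ipi_cpu(cpu);     /* assumed platform helper */
        }

        put_cpu();                              /* preemption back on */
}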
Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Defer bfin_setup_caches(cpu) to avoid unexpected faults due to the cpu
state not yet being fully initialized.
Signed-off-by: steven miao <realmz6@gmail.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Common code checks the alignment of some of the variables and calls BUG()
if they aren't page aligned.
Signed-off-by: steven miao <realmz6@gmail.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
We use these insns when testing, so enable them by default for all
of our development boards.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Common code no longer defines this, so stop using it.
Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
The board doesn't actually have a pin hooked up to do card detection,
so punt the code for it.
Signed-off-by: Andreas Schallenberg <Andreas.Schallenberg@3alitydigital.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Aaron Wu <aaronwu06@gmail.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
The BF54x lacks dedicated DMA channels for the UART peripherals, so they need
to be muxed with other peripherals. Add a Kconfig option so people can select
which channels the UARTs will use, letting them pick between the SPORTs and
the less commonly used EPPI/PIXC peripherals.
Signed-off-by: steven miao <realmz6@gmail.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Since coreb_trampoline_start() calls coreb_start(), they need to be in
the same section.
Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Use the newly added smp_call_func_t in smp_call_function_interrupt for
the func variable, and make the comment above the WARN more assertive
and explicit. Also, func is a function pointer and does not need an
offset, so use %pf not %pS.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Mike Galbraith reported finding a lockup ("perma-spin bug") where the
cpumask passed to smp_call_function_many was cleared by other cpu(s)
while a cpu was preparing its call_data block, resulting in no cpu to
clear the last ref and unlock the block.
Having cpus clear their bit asynchronously could be useful on a mask of
cpus that might have a translation context, or cpus that need a push to
complete an rcu window.
Instead of adding a BUG_ON and requiring yet another cpumask copy, just
detect the race and handle it.
Note: arch_send_call_function_ipi_mask must still handle an empty
cpumask because the data block is globally visible before that arch
callback is made. And (obviously) there are no guarantees as to which cpus
are notified if the mask is changed during the call; only cpus that were
online and had their mask bit set during the whole call are guaranteed
to be called.
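A hedged sketch of the detection, simplified from the 2.6.38-era
smp_call_function_many():

        unsigned int refs;

        /* The caller's mask may be cleared concurrently by other cpus, so
         * count the targets only after copying the mask into our call_data
         * block, and bail out if the race left nothing to do. */
        cpumask_and(data->cpumask, mask, cpu_online_mask);
        cpumask_clear_cpu(this_cpu, data->cpumask);
        refs = cpumask_weight(data->cpumask);

        if (unlikely(!refs)) {
                csd_unlock(&data->csd);         /* nothing to wait for */
                return;
        }

        atomic_set(&data->refs, refs);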
Reported-by: Mike Galbraith <efault@gmx.de>
Reported-by: Jan Beulich <JBeulich@novell.com>
Acked-by: Jan Beulich <jbeulich@novell.com>
Cc: stable@kernel.org
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Paul McKenney's review pointed out two problems with the barriers in the
2.6.38 update to the smp call function many code.
First, a barrier that would force the func and info members of data to
be visible before their consumption in the interrupt handler was
missing. This can be solved by adding an smp_wmb between setting the
func and info members and setting the cpumask; this will pair
with the existing and required smp_rmb ordering the cpumask read before
the read of refs. This placement avoids the need for a second smp_rmb in
the interrupt handler, which would be executed on each of the N cpus
executing the call request. (I had thought this barrier was present,
but it was not.)
Second, the previous write to refs (establishing the zero that the
interrupt handler was testing from all cpus) was performed by a third-party
cpu. This would invoke transitivity which, as a recent or
concurrent addition to memory-barriers.txt now explicitly states, would
require a full smp_mb().
However, we know the cpumask will only be set by one cpu (the data
owner) and any previous iteration of the mask would have been cleared by the
reading cpu. By redundantly writing refs to 0 on the owning cpu before
the smp_wmb, the write to refs will follow the same path as the writes
that set the cpumask, which in turn allows us to keep the barrier in the
interrupt handler an smp_rmb instead of promoting it to an smp_mb (which
would be executed by N cpus for each of the possible M elements on the
list).
I moved and expanded the comment about our (ab)use of the rcu list
primitives for the concurrent walk earlier into this function. I
considered moving the first two paragraphs to the queue list head and
lock, but felt it would have been too disconnected from the code.
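A sketch of the resulting ordering on the sender and receiver sides,
simplified from kernel/smp.c of that era (not the literal diff):

        /* sender (smp_call_function_many), owning cpu only */
        data->csd.func = func;
        data->csd.info = info;
        atomic_set(&data->refs, 0);     /* redundant zero written by the owner,
                                         * so it follows the same store path as
                                         * the cpumask writes below */
        smp_wmb();                      /* func/info (and the zeroed refs) are
                                         * visible before any cpumask bit */
        cpumask_and(data->cpumask, mask, cpu_online_mask);
        atomic_set(&data->refs, cpumask_weight(data->cpumask));

        /* receiver (smp_call_function_interrupt), each targeted cpu */
        list_for_each_entry_rcu(data, &call_function.queue, csd.list) {
                if (!cpumask_test_cpu(cpu, data->cpumask))
                        continue;
                smp_rmb();              /* pairs with the smp_wmb above: order
                                         * the cpumask read before refs */
                if (!atomic_read(&data->refs))
                        continue;
                data->csd.func(data->csd.info);
                /* reference and cpumask clearing elided */
        }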
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: stable@kernel.org (2.6.32 and later)
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Peter pointed out there was nothing preventing the list_del_rcu in
smp_call_function_interrupt from running before the list_add_rcu in
smp_call_function_many.
Fix this by not setting refs until we have gotten the lock for the list.
Take advantage of the wmb in list_add_rcu to save an explicit additional
one.
I tried to force this race with a udelay before the lock & list_add and
by mixing all 64 online cpus with just 3 random cpus in the mask, but
was unsuccessful. Still, inspection shows a valid race, and the fix is
an extension of the existing protection window in the current code.
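Sketched against the same era's kernel/smp.c (simplified), the window is
closed roughly like this:

        raw_spin_lock_irqsave(&call_function.lock, flags);
        /* list_add_rcu() contains a wmb(), so readers that see the new
         * element also see the fully initialised data block ... */
        list_add_rcu(&data->csd.list, &call_function.queue);
        /* ... and refs only becomes non-zero after we hold the list lock,
         * so a racing list_del_rcu in smp_call_function_interrupt cannot
         * run before the list_add_rcu above. */
        atomic_set(&data->refs, cpumask_weight(data->cpumask));
        raw_spin_unlock_irqrestore(&call_function.lock, flags);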
Cc: stable@kernel.org (v2.6.32 and later)
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Change the _mapcount value indicating PageBuddy from -2 to -128 for
more robustness against page_mapcount() underflows.
Use reset_page_mapcount instead of __ClearPageBuddy in bad_page to
ignore the previous retval of PageBuddy().
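Roughly, the change amounts to something like this (a sketch, not the
literal diff):

/* A large negative sentinel keeps PageBuddy() false even if a buggy path
 * underflows _mapcount a few times from its -1 "unmapped" baseline. */
#define PAGE_BUDDY_MAPCOUNT_VALUE       (-128)

static inline int PageBuddy(struct page *page)
{
        return atomic_read(&page->_mapcount) == PAGE_BUDDY_MAPCOUNT_VALUE;
}

static inline void __SetPageBuddy(struct page *page)
{
        VM_BUG_ON(atomic_read(&page->_mapcount) != -1);
        atomic_set(&page->_mapcount, PAGE_BUDDY_MAPCOUNT_VALUE);
}

static inline void __ClearPageBuddy(struct page *page)
{
        VM_BUG_ON(!PageBuddy(page));
        atomic_set(&page->_mapcount, -1);
}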
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This change supports building the kernel with newer binutils where
a shift of greater than the word size is no longer interpreted
silently as modulo the word size, but instead generates a warning.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
|
|
The RPC task's RPC_TASK_QUEUED bit must be checked before rpc_killall_tasks()
tries to wake the task up, because task->tk_waitqueue may not be set yet
(i.e. it can still be NULL).
Also, as Trond Myklebust mentioned, such an approach (instead of checking
tk_waitqueue against NULL) allows us to "optimise away the call to
rpc_wake_up_queued_task() altogether for those tasks that aren't queued";
a minimal sketch of the check follows the trace below.
Here is an example of a dereference of tk_waitqueue while it is still NULL:
CPU 0                       CPU 1                          CPU 2
--------------------        ---------------------          --------------------------
nfs4_run_open_task
 rpc_run_task
  rpc_execute
   rpc_set_active
   rpc_make_runnable
                            (waiting)
                            rpc_async_schedule
                             nfs4_open_prepare
                              nfs_wait_on_sequence
                                                           nfs_umount_begin
                                                            rpc_killall_tasks
                                                             rpc_wake_up_task
                                                              rpc_wake_up_queued_task
                                                               spin_lock(tk_waitqueue == NULL)
                                                               BUG()
                              rpc_sleep_on
                               spin_lock(&q->lock)
                                __rpc_sleep_on
                                 task->tk_waitqueue = q
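A minimal sketch of the check in rpc_killall_tasks(), with names as in
net/sunrpc/sched.c of that era (simplified):

        /* Only touch tk_waitqueue if the task is actually queued on it;
         * tasks that are not queued skip the wake-up entirely. */
        rovr->tk_flags |= RPC_TASK_KILLED;
        rpc_exit(rovr, -EIO);
        if (RPC_IS_QUEUED(rovr))
                rpc_wake_up_queued_task(rovr->tk_waitqueue, rovr);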
Signed-off-by: Stanislav Kinsbursky <skinsbursky@openvz.org>
Cc: stable@kernel.org
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Remove a redundant check.
Signed-off-by: Jinqiu Yang <crindy646@gmail.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
This fixes a race in which the task->tk_callback() puts the rpc_task
to sleep, setting a new callback. Under certain circumstances, the current
code may end up executing the task->tk_action before it gets round to the
callback.
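A sketch of the reordered loop in __rpc_execute() (simplified, not the
literal diff):

        for (;;) {
                void (*do_action)(struct rpc_task *);

                /* Always run a pending tk_callback first; only when there
                 * is none do we fall through to the next tk_action FSM
                 * step, so a callback that puts the task back to sleep can
                 * never be overtaken by tk_action. */
                do_action = task->tk_callback;
                task->tk_callback = NULL;
                if (do_action == NULL) {
                        do_action = task->tk_action;
                        if (do_action == NULL)
                                break;
                }
                do_action(task);

                /* queueing / sleeping logic of __rpc_execute() elided */
        }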
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
|
|
Commit 6440e5967bc broke old userspaces that do not set the tss address
before entering a vcpu. Unbreak them by setting the tss address to a safe
value on the first vcpu entry. New userspaces should set the tss address,
so print a warning in case one doesn't.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
This patch does:
- call vcpu->arch.mmu.update_pte directly
- use gfn_to_pfn_atomic in update_pte path
The suggestion is from Avi.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
Clean up the code of pte_prefetch_gfn_to_memslot and mapping_level_dirty_bitmap.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
fix:
[ 3494.671786] stack backtrace:
[ 3494.671789] Pid: 10527, comm: qemu-system-x86 Not tainted 2.6.38-rc6+ #23
[ 3494.671790] Call Trace:
[ 3494.671796] [] ? lockdep_rcu_dereference+0x9d/0xa5
[ 3494.671826] [] ? kvm_memslots+0x6b/0x73 [kvm]
[ 3494.671834] [] ? gfn_to_memslot+0x16/0x4f [kvm]
[ 3494.671843] [] ? gfn_to_hva+0x16/0x27 [kvm]
[ 3494.671851] [] ? kvm_write_guest_page+0x31/0x83 [kvm]
[ 3494.671861] [] ? kvm_clear_guest_page+0x1a/0x1c [kvm]
[ 3494.671867] [] ? vmx_set_tss_addr+0x83/0x122 [kvm_intel]
and:
[ 8328.789599] stack backtrace:
[ 8328.789601] Pid: 18736, comm: qemu-system-x86 Not tainted 2.6.38-rc6+ #23
[ 8328.789603] Call Trace:
[ 8328.789609] [] ? lockdep_rcu_dereference+0x9d/0xa5
[ 8328.789621] [] ? kvm_memslots+0x6b/0x73 [kvm]
[ 8328.789628] [] ? gfn_to_memslot+0x16/0x4f [kvm]
[ 8328.789635] [] ? gfn_to_hva+0x16/0x27 [kvm]
[ 8328.789643] [] ? kvm_write_guest_page+0x31/0x83 [kvm]
[ 8328.789699] [] ? kvm_clear_guest_page+0x1a/0x1c [kvm]
[ 8328.789713] [] ? vmx_create_vcpu+0x316/0x3c8 [kvm_intel]
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
commit 387b9f97750444728962b236987fbe8ee8cc4f8c moved kvm_request_guest_time_update(vcpu),
breaking 32bit SMP guests using kvm-clock. Fix this by moving the (new) clock
update function to its proper place.
Signed-off-by: Nikola Ciprich <nikola.ciprich@linuxbox.cz>
Acked-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
Currently, if io port + len crosses an 8-bit boundary in the io permission
bitmap, the check may allow IO that should not be allowed. The patch fixes that.
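A hedged sketch of the byte-spanning check; the helper below is illustrative,
not the actual emulator function (bitmap bounds/TSS limit checks elided):

/* Read two consecutive bytes of the TSS I/O permission bitmap so that a
 * port range crossing a byte boundary (e.g. port 7, len 2) is fully
 * covered; any set bit in the range means the access is denied. */
static bool io_range_permitted(const u8 *io_bitmap, u16 port, int len)
{
        u16 perm = io_bitmap[port / 8] | (io_bitmap[port / 8 + 1] << 8);
        u16 mask = (1 << len) - 1;

        return ((perm >> (port & 7)) & mask) == 0;
}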
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
The current implementation truncates the upper 32 bits of the TR base address
during the IO permission bitmap check. The patch fixes this.
Reported-and-tested-by: Francis Moreau <francis.moro@gmail.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
With CONFIG_CC_STACKPROTECTOR, we need a valid %gs at all times, so disable
lazy reload and do an eager reload immediately after the vmexit.
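A sketch of what the eager reload after vmexit looks like (simplified; field
names as in arch/x86/kvm/svm.c of that era):

#ifdef CONFIG_X86_64
        wrmsrl(MSR_GS_BASE, svm->host.gs_base);
#else
        loadsegment(fs, svm->host.fs);
        /* eager %gs reload: the 32-bit stack protector reads the canary
         * through %gs, so it must be valid before any protected function
         * runs after the vmexit */
        loadsegment(gs, svm->host.gs);
#endif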
Reported-by: IVAN ANGELOV <ivangotoy@gmail.com>
Acked-By: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
Access to this page is mostly done through the regs member which holds
the address to this page. The exceptions are in vmx_vcpu_reset() and
kvm_free_lapic() and these both can easily be converted to using regs.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
The RCU use in kvm_irqfd_deassign is tricky: we have rcu_assign_pointer
but no synchronize_rcu: synchronize_rcu is done by kvm_irq_routing_update
which we share a spinlock with.
Fix up a comment in an attempt to make this clearer.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
These macros are not used, so remove them.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
Use __get_free_page instead of alloc_page plus page_address, and
free_page instead of __free_page plus virt_to_page.
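For illustration, the before/after pattern (a sketch, assuming GFP_KERNEL
context):

/* before: two steps, and freeing needs the struct page back */
static void *old_way(void)
{
        struct page *page = alloc_page(GFP_KERNEL);

        return page ? page_address(page) : NULL;
        /* freed later via __free_page(virt_to_page(va)) */
}

/* after: one call that hands back the virtual address directly */
static void *new_way(void)
{
        return (void *)__get_free_page(GFP_KERNEL);
        /* freed later via free_page((unsigned long)va) */
}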
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
No need to record the gfn to verify that the pte has the same mode as the
current vcpu, because we only speculatively update the pte if the pte and
vcpu have the same mode.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|