path: root/arch/powerpc/xmon
Age | Commit message | Author | Files | Lines
2017-04-24 | powerpc: Introduce a new helper to obtain function entry points | Naveen N. Rao | 2 | -3/+44
kprobe_lookup_name() is specific to the kprobe subsystem and may not always return the function entry point (in a subsequent patch for KPROBES_ON_FTRACE). For looking up function entry points, introduce a separate helper and use it in optprobes.c Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
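For illustration, a minimal C sketch of what such an entry-point lookup helper can look like; the wrapper name below is hypothetical, while kallsyms_lookup_name() and ppc_function_entry() are existing kernel helpers:

  #include <linux/kallsyms.h>
  #include <asm/code-patching.h>

  /* Hypothetical helper name; sketch of the idea only. */
  static unsigned long lookup_function_entry(const char *name)
  {
          unsigned long addr = kallsyms_lookup_name(name);

          if (!addr)
                  return 0;
          /* On ELFv2, ppc_function_entry() resolves the local entry point. */
          return ppc_function_entry((void *)addr);
  }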
2017-04-24 | powerpc/kprobes: Add support for KPROBES_ON_FTRACE | Naveen N. Rao | 6 | -2/+126
Allow kprobes to be placed on ftrace _mcount() call sites. This optimization avoids the use of a trap, by riding on ftrace infrastructure. This depends on HAVE_DYNAMIC_FTRACE_WITH_REGS which depends on MPROFILE_KERNEL, which is only currently enabled on powerpc64le with newer toolchains. Based on the x86 code by Masami. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-24 | powerpc/ftrace: Restore LR from pt_regs | Naveen N. Rao | 1 | -6/+7
Pass the real LR to the ftrace handler. This is needed for KPROBES_ON_FTRACE for the pre handlers. Also, with KPROBES_ON_FTRACE, the link register may be updated by the pre handlers or by a registered kretprobe. Honor the updated LR by restoring it from pt_regs, rather than from the stack save area. Live patch and function graph continue to work fine with this change. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/kprobes: Blacklist common exception handlers | Naveen N. Rao | 1 | -0/+1
Blacklist all the exception common/OOL handlers as the kernel stack is not yet setup, which means we can't take a trap at this point. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/kprobes: Blacklist exception handlers | Naveen N. Rao | 3 | -0/+13
Introduce __head_end to mark end of the early fixed sections and use it to blacklist all exception handlers from kprobes. mpe: We do not need to do anything special for relocatable kernels, where the exception vectors are split from the main kernel, as the split vectors are already excluded by the check for kernel_text_address(). Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> [mpe: Move __head_end outside #ifdef 64-bit to unbreak the 32-bit build] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
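A sketch approximating the resulting blacklist check (the exact code in the patch may differ; __head_end and the other section symbols come from the linker script via asm/sections.h):

  bool arch_within_kprobe_blacklist(unsigned long addr)
  {
          /* Usual .kprobes.text range, plus everything up to __head_end,
           * i.e. the early fixed sections holding the exception vectors. */
          return (addr >= (unsigned long)__kprobes_text_start &&
                  addr < (unsigned long)__kprobes_text_end) ||
                 (addr >= (unsigned long)_stext &&
                  addr < (unsigned long)__head_end);
  }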
2017-04-23 | powerpc/kprobes: Convert __kprobes to NOKPROBE_SYMBOL() | Naveen N. Rao | 3 | -60/+81
Along similar lines as commit 9326638cbee2 ("kprobes, x86: Use NOKPROBE_SYMBOL() instead of __kprobes annotation"), convert __kprobes annotation to either NOKPROBE_SYMBOL() or nokprobe_inline. The latter forces inlining, in which case the caller needs to be added to NOKPROBE_SYMBOL(). Also:
- blacklist arch_deref_entry_point(), and
- convert a few regular inlines to nokprobe_inline in lib/sstep.c

A key benefit is the ability to detect such symbols as being blacklisted. Before this patch:

  $ cat /sys/kernel/debug/kprobes/blacklist | grep read_mem
  $ perf probe read_mem
  Failed to write event: Invalid argument
    Error: Failed to add events.
  $ dmesg | tail -1
  [ 3736.112815] Could not insert probe at _text+10014968: -22

After patch:

  $ cat /sys/kernel/debug/kprobes/blacklist | grep read_mem
  0xc000000000072b50-0xc000000000072d20	read_mem
  $ perf probe read_mem
  read_mem is blacklisted function, skip it.
  Added new events:
    (null):(null)        (on read_mem)
    probe:read_mem       (on read_mem)

  You can now use it in all perf tools, such as:
    perf record -e probe:read_mem -aR sleep 1

  $ grep " read_mem" /proc/kallsyms
  c000000000072b50 t read_mem
  c0000000005f3b40 t read_mem
  $ cat /sys/kernel/debug/kprobes/list
  c0000000005f3b48  k  read_mem+0x8    [DISABLED]

Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> [mpe: Minor change log formatting, fix up some conflicts] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
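The shape of the conversion, shown on a made-up function rather than an excerpt from the patch:

  #include <linux/kprobes.h>

  /* Before: __kprobes places the function in the .kprobes.text section. */
  static int __kprobes foo_handler_old(void)
  {
          return 0;
  }

  /* After: the function stays in normal text; NOKPROBE_SYMBOL() records it
   * in the blacklist, so it appears in /sys/kernel/debug/kprobes/blacklist. */
  static int foo_handler(void)
  {
          return 0;
  }
  NOKPROBE_SYMBOL(foo_handler);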
2017-04-23 | powerpc/ftrace: Move stack setup and teardown code into ftrace_graph_caller() | Naveen N. Rao | 1 | -4/+1
Move the stack setup and teardown code into ftrace_graph_caller(). This way, we don't incur the cost of setting it up unless function graph is enabled for this function. Also, remove the extraneous LR restore code after the function graph stub. LR has previously been restored and neither livepatch_handler() nor ftrace_graph_caller() return back here. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> [mpe: Drop bad change to non-mprofile-kernel version of ftrace_graph_caller] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/kprobes: Remove duplicate saving of MSR | Naveen N. Rao | 1 | -1/+0
set_current_kprobe() already saves regs->msr into kprobe_saved_msr. Remove the redundant save. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Simplify POWER9 DD1 idle workaround code | Nicholas Piggin | 2 | -11/+8
The idle workaround does not need to load PACATOC, and it does not need to be called within a nested function that requires LR to be saved. Load the PACATOC at entry to the idle wakeup. It does not matter which PACA this comes from, so it's okay to call before the workaround. Then apply the workaround to get the right PACA. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Idle POWER8 avoid full state loss recovery where possible | Nicholas Piggin | 2 | -9/+112
If not all threads were in winkle, full state loss recovery is not necessary and can be avoided. A previous patch removed this optimisation due to some complexity with the implementation. Re-implement it by counting the number of threads in winkle with the per-core idle state. Only restore full state loss if all threads were in winkle. This has a small window of false positives right before threads execute winkle and just after they wake up, when the winkle count does not reflect the true number of threads in winkle. This is not a significant problem in comparison with even the minimum winkle duration. For correctness, a false positive is not a problem (only false negatives would be). Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Idle do not hold reservation longer than required | Nicholas Piggin | 1 | -9/+11
When taking the core idle state lock, grab it immediately like a regular lock, rather than adding more tests in there. Holding the lock keeps it stable, so there is no need to do it while holding the reservation. Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Expand core idle state bits | Nicholas Piggin | 2 | -18/+19
In preparation for adding more bits to the core idle state word, move the lock bit up, and unlock by flipping the lock bit rather than masking off all but the thread bits. Add branch hints for atomic operations while we're here. Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Fix POWER9 machine check handler from stop state | Nicholas Piggin | 3 | -35/+70
The ISA specifies power save wakeup due to a machine check exception can cause a machine check interrupt (rather than the usual system reset interrupt). The machine check handler copes with this by doing low level machine check recovery without restoring full state from idle, then queues up a machine check event for logging, then directly executes the same idle instruction it woke from. This minimises the work done before recovery is performed.

The problem is that it requires machine specific instructions and knowledge of the book3s idle code. Currently it only has code to handle POWER8 idle, so POWER9 crashes when trying to execute the P8 idle instructions which don't exist in ISAv3.0B.

  cpu 0x0: Vector: e40 (Emulation Assist) at [c0000000008f3810]
      pc: c000000000008380: machine_check_handle_early+0x130/0x2f0
      lr: c00000000053a098: stop_loop+0x68/0xd0
      sp: c0000000008f3a90
     msr: 9000000000081001
    current = 0xc0000000008a1080
    paca    = 0xc00000000ffd0000
    softe: 0  irq_happened: 0x01
    pid = 0, comm = swapper/0

Instead of going to sleep after recovery, do the usual idle wakeup and state restoration by calling into the normal idle wakeup path. This reuses the normal idle wakeup paths. Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Reviewed-by: Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Use alternative feature patching | Nicholas Piggin | 1 | -9/+14
This reduces the number of nops for POWER8. Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Stop using bit in HSPRG0 to test winkle | Nicholas Piggin | 4 | -59/+15
The POWER8 idle code has a neat trick of programming the power on engine to restore a low bit into HSPRG0, so idle wakeup code can test and see if it has been programmed this way and therefore lost all state. Restore time can be reduced if winkle has not been reached. However this messes with our r13 PACA pointer, and requires HSPRG0 to be written to. It also optimizes the slowest and most uncommon case at the expense of another SPR write in the common nap state wakeup. Remove this complexity and assume winkle sleeps always require a state restore. This speedup could be made entirely contained within the winkle idle code by counting per-core winkles and setting a thread bitmap when all have gone to winkle. Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Move remaining system reset idle code into idle_book3s.S | Nicholas Piggin | 2 | -46/+44
No functional change. Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-23 | powerpc/64s: Remove unnecessary relocation branch from idle handler | Nicholas Piggin | 2 | -9/+1
The system reset idle handler system_reset_idle_common is relocated, so relocation is not required to branch to kvm_start_guest. The superfluous relocation does not result in incorrect code, but it does not compile outside of exception-64s.S (with fixed section definitions). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-21 | powerpc/mm: Add support for runtime configuration of ASLR limits | Michael Ellerman | 2 | -5/+50
Add powerpc support for mmap_rnd_bits and mmap_rnd_compat_bits, which are two sysctls that allow a user to configure the number of bits of randomness used for ASLR. Because of the way the Kconfig for ARCH_MMAP_RND_BITS is defined, we have to construct at least the MIN value in Kconfig, vs in a header which would be more natural. Given that, we just go ahead and do it all in Kconfig. At least according to the code (the documentation makes no mention of it), the value is defined as the number of bits of randomisation *of the page*, not the address. This makes some sense, with larger page sizes more of the low bits are forced to zero, which would reduce the randomisation if we didn't take the PAGE_SIZE into account. However it does mean the min/max values have to change depending on the PAGE_SIZE in order to actually limit the amount of address space consumed by the randomisation. The result of that is that we have to define the default values based on both 32-bit vs 64-bit, but also the configured PAGE_SIZE. Furthermore now that we have 128TB address space support on Book3S, we also have to take that into account. Finally we can wire up the value in arch_mmap_rnd(). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> Tested-by: Bhupesh Sharma <bhsharma@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
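A rough sketch of how arch_mmap_rnd() can consume the sysctl-backed limits (approximate, not the exact hunk from the patch):

  unsigned long arch_mmap_rnd(void)
  {
          unsigned long shift, rnd;

          /* mmap_rnd_bits / mmap_rnd_compat_bits are the sysctl-backed limits. */
          shift = mmap_rnd_bits;
  #ifdef CONFIG_COMPAT
          if (is_32bit_task())
                  shift = mmap_rnd_compat_bits;
  #endif
          rnd = get_random_long() % (1UL << shift);

          /* The random value is in pages, so scale by PAGE_SHIFT. */
          return rnd << PAGE_SHIFT;
  }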
2017-04-21 | powerpc/mm: Wire up ioremap_cache() | Oliver O'Halloran | 1 | -0/+2
The default implementation of ioremap_cache() is aliased to ioremap(). On powerpc ioremap() creates cache-inhibited mappings by default which is almost certainly not what you wanted. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
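The wiring is essentially a one-line mapping onto ioremap_prot() with normal cacheable kernel protection; roughly (a sketch based on the description above, not a quote of the patch):

  /* Cacheable mapping, instead of the cache-inhibited default of ioremap(). */
  #define ioremap_cache(addr, size) \
          ioremap_prot((addr), (size), pgprot_val(PAGE_KERNEL))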
2017-04-20 | powerpc/kprobes: Emulate instructions on kprobe handler re-entry | Naveen N. Rao | 1 | -0/+8
On kprobe handler re-entry, try to emulate the instruction rather than single stepping always. Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-20 | powerpc/kprobes: Factor out code to emulate instruction into a helper | Naveen N. Rao | 1 | -21/+31
Factor out code to emulate instruction into a try_to_emulate() helper function. This makes no functional changes. Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
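A simplified sketch of such a helper (error handling trimmed; not the full patch):

  #include <linux/kprobes.h>
  #include <asm/sstep.h>

  /* nokprobe_inline forces inlining, so the caller carries the blacklisting. */
  static nokprobe_inline int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
  {
          unsigned int insn = *p->ainsn.insn;

          /* emulate_step() returns > 0 on success and advances regs->nip. */
          return emulate_step(regs, insn);
  }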
2017-04-20 | powerpc/kretprobes: Override default function entry offset | Naveen N. Rao | 1 | -0/+9
With ABIv2, we offset 8 bytes into a function to get at the local entry point. mpe: NB this function is currently not called, the change to generic code to call it is being merged via the tip tree. Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Acked-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
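The override boils down to a few lines; a sketch of the idea, with an illustrative hook name since the generic hook it implements is merged via the tip tree:

  /* Illustrative name: report whether an offset still lands on the entry. */
  bool kretprobe_offset_within_entry(unsigned long offset)
  {
  #ifdef PPC64_ELF_ABI_v2
          /* Up to 8 bytes in is still "the entry": the ELFv2 local entry point. */
          return offset <= 8;
  #else
          return offset == 0;
  #endif
  }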
2017-04-20 | powerpc/kprobes: Fix handling of function offsets on ABIv2 | Naveen N. Rao | 4 | -8/+9
commit 239aeba76409 ("perf powerpc: Fix kprobe and kretprobe handling with kallsyms on ppc64le") changed how we use the offset field in struct kprobe on ABIv2. perf now offsets from the global entry point if an offset is specified and otherwise chooses the local entry point.

Fix the same in kernel for kprobe API users. We do this by extending kprobe_lookup_name() to accept an additional parameter to indicate the offset specified with the kprobe registration. If offset is 0, we return the local function entry and return the global entry point otherwise.

With:
  # cd /sys/kernel/debug/tracing/
  # echo "p _do_fork" >> kprobe_events
  # echo "p _do_fork+0x10" >> kprobe_events

before this patch:
  # cat ../kprobes/list
  c0000000000d0748  k  _do_fork+0x8    [DISABLED]
  c0000000000d0758  k  _do_fork+0x18   [DISABLED]
  c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

and after:
  # cat ../kprobes/list
  c0000000000d04c8  k  _do_fork+0x8    [DISABLED]
  c0000000000d04d0  k  _do_fork+0x10   [DISABLED]
  c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-20 | kprobes: Convert kprobe_lookup_name() to a function | Naveen N. Rao | 5 | -67/+69
The macro is now pretty long and ugly on powerpc. In the light of further changes needed here, convert it to a __weak variant to be over-ridden with a nicer looking function. Suggested-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
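The generic default then reduces to a thin kallsyms wrapper that architectures such as powerpc can override; a sketch (parameter list simplified):

  #include <linux/kallsyms.h>
  #include <linux/kprobes.h>

  /* Weak default; an architecture may provide a stronger definition. */
  kprobe_opcode_t * __weak kprobe_lookup_name(const char *name)
  {
          return (kprobe_opcode_t *)kallsyms_lookup_name(name);
  }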
2017-04-20 | kprobes: Skip preparing optprobe if the probe is ftrace-based | Masami Hiramatsu | 1 | -2/+9
Skip preparing an optprobe if the probe is ftrace-based, since it must not be optimized anyway (or is already optimized by ftrace). Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
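Schematically, the guard looks like the following; the function name and the exact placement inside kernel/kprobes.c are illustrative, not taken from the patch:

  /* Illustrative wrapper showing where the new check conceptually sits. */
  static void prepare_optprobe_sketch(struct kprobe *p)
  {
          /* ftrace-based probes already hook the ftrace call site and must
           * not be turned into jump-optimized probes. */
          if (kprobe_ftrace(p))
                  return;

          /* ... continue with normal optprobe preparation ... */
  }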
2017-04-20 | powerpc/64s: Use relon prolog for EXC_VIRT_OOL_MASKABLE_HV handlers | Nicholas Piggin | 1 | -1/+1
Hypervisor Virtualization and Directed Hypervisor Doorbell interrupt handlers use the macro EXC_VIRT_OOL_MASKABLE_HV for their relocation-on handlers, which calls MASKABLE_RELON_EXCEPTION_HV_OOL, which uses the *real mode* interrupt prolog. This means we needlessly rfid from virtual mode to virtual mode. For POWER8 it only affects doorbell IPIs. Context switch microbenchmark between threads with snooze disabled (which causes IPI) gets about 3% faster, about 370 cycles. Should be more important on POWER9 with global doorbells and HVI for host interrupts. Use the RELON variant instead to reduce overhead. Fixes: 1707dd1613 ("powerpc: Save CFAR before branching in interrupt entry paths") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Fold some more detail into the change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-20 | powerpc/xive: Fix missing check of rc != OPAL_BUSY | Michael Ellerman | 1 | -0/+1
Dan Carpenter noticed that the code in __xive_native_disable_queue() has a for loop with an unconditional break in the middle, which doesn't make a lot of sense. What the code's supposed to do is loop as long as OPAL says it's busy, if we get any other return code, either success or failure, then we should break the loop. So add the missing check. Fixes: 243e25112d06 ("powerpc/xive: Native exploitation of the XIVE interrupt controller") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
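The resulting loop then has the following shape (a sketch; the OPAL call and delay are approximations of the code in __xive_native_disable_queue()):

  s64 rc;

  for (;;) {
          rc = opal_xive_set_queue_info(vp_id, prio, 0, 0, 0);
          if (rc != OPAL_BUSY)
                  break;  /* success or a real error: stop retrying */
          msleep(1);
  }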
2017-04-19 | powerpc/64s: Remove SAO feature from Power9 DD1 | Nicholas Piggin | 1 | -1/+2
Power9 DD1 does not implement SAO. Although it's not widely used, its presence or absence is visible to user space via arch_validate_prot() so it's moderately important that we get the value right. Fixes: 7dccfbc325bb ("powerpc/book3s: Add a cpu table entry for different POWER9 revs") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/64s: Remove ICSWX feature from Power9 | Nicholas Piggin | 2 | -2/+3
Power9 does not implement the icswx instruction. This CPU feature is not visible to userspace and is only used in the CONFIG_PPC_ICSWX code, which is generally not enabled, and can only be triggered by other code using icswx, which should not happen on Power9 systems in the first place. So impact should be minimal. Fixes: c3ab300ea5 ("powerpc: Add POWER9 cputable entry") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/perf: Add Power8 mem_access event to sysfs | Madhavan Srinivasan | 2 | -0/+8
Patch add "mem_access" event to sysfs. This as-is not a raw event supported by Power8 pmu. Instead, it is formed based on raw event encoding specificed in isa207-common.h. Primary PMU event used here is PM_MRK_INST_CMPL. This event tracks only the completed marked instructions. Random sampling mode (MMCRA[SM]) with Random Instruction Sampling (RIS) is enabled to mark type of instructions. With Random sampling in RLS mode with PM_MRK_INST_CMPL event, the LDST /DATA_SRC fields in SIER identifies the memory hierarchy level (eg: L1, L2 etc) statisfied a data-cache miss for a marked instruction. Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/perf: Support to export SIERs bit in Power9 | Madhavan Srinivasan | 1 | -0/+2
Export SIER bits to userspace via the perf_mem_data_src and perf_sample_data structs. Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/perf: Support to export SIERs bit in Power8 | Madhavan Srinivasan | 1 | -0/+2
Export SIER bits to userspace via the perf_mem_data_src and perf_sample_data structs. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/perf: Support to export MMCRA[TEC*] field to userspace | Madhavan Srinivasan | 4 | -0/+23
When the threshold feature is used with MMCRA[Threshold Event Counter Event], MMCRA[Threshold Start Event] and MMCRA[Threshold End Event], the hardware updates MMCRA[Threshold Event Counter Exponent] and MMCRA[Threshold Event Counter Multiplier] with the corresponding threshold event count values. Export MMCRA[TECX/TECM] to userspace in the 'weight' field of struct perf_sample_data. Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/perf: Export memory hierarchy info to user space | Madhavan Srinivasan | 4 | -1/+95
The LDST and DATA_SRC fields in SIER identify the memory hierarchy level (eg: L1, L2 etc) from which a data-cache miss for a marked instruction was satisfied. Use the 'perf_mem_data_src' object to export this hierarchy level to user space. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/perf: Define big-endian version of perf_mem_data_src | Sukadev Bhattiprolu | 2 | -0/+32
perf_mem_data_src is a union that is initialized in the kernel via the ->val field and accessed by userspace via the mem_xxx bitfields. For this to work correctly on big endian platforms, we need a big-endian definition for the bitfields.

Currently on a big endian system, if a user requests PERF_SAMPLE_DATA_SRC (perf report -d), they will get the default value from perf_sample_data_init(), which is PERF_MEM_NA. The value for PERF_MEM_NA is constructed using shifts:

  /* TLB access */
  #define PERF_MEM_TLB_NA		0x01 /* not available */
  ...
  #define PERF_MEM_TLB_SHIFT	26

  #define PERF_MEM_S(a, s) \
	(((__u64)PERF_MEM_##a##_##s) << PERF_MEM_##a##_SHIFT)

  #define PERF_MEM_NA (PERF_MEM_S(OP, NA)   |\
		       PERF_MEM_S(LVL, NA)  |\
		       PERF_MEM_S(SNOOP, NA)|\
		       PERF_MEM_S(LOCK, NA) |\
		       PERF_MEM_S(TLB, NA))

Which works out as:

  ((0x01 << 0) | (0x01 << 5) | (0x01 << 19) | (0x01 << 24) | (0x01 << 26))

Which means the PERF_MEM_NA value comes out of the kernel as 0x5080021 in CPU endian.

But then in the perf tool, the code uses the bitfields to inspect the value, and currently the bitfields are defined using little endian ordering. So eg. in perf_mem__tlb_scnprintf() we see:

  data_src->val = 0x5080021
           op = 0x0
          lvl = 0x0
        snoop = 0x0
         lock = 0x0
         dtlb = 0x0
         rsvd = 0x5080021

Because of the way the perf tool code is written this is still displayed to the user as "N/A", so there is no bug visible at the UI level.

Currently there are no big endian architectures which export a meaningful value (ie. other than PERF_MEM_NA), so the extent of the bug on big endian platforms is that the PERF_MEM_NA value is exported incorrectly as described above.

Subsequent patches will add support on big endian powerpc for populating the data source value. This patch does a minimal fix of adding big endian definition of the bitfields to match the values that are already exported by the kernel on big endian. And it makes no change on little endian.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
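The added definition mirrors the existing little-endian layout with the bitfields listed in reverse order, so each field keeps the same bit position within ->val. An abbreviated sketch (field widths per the existing little-endian definition; not a verbatim quote of the patch):

  #include <linux/types.h>

  union perf_mem_data_src {
          __u64 val;
  #if defined(__BIG_ENDIAN_BITFIELD)
          struct {
                  __u64   mem_rsvd:31,
                          mem_dtlb:7,     /* TLB access */
                          mem_lock:2,     /* lock instr */
                          mem_snoop:5,    /* snoop mode */
                          mem_lvl:14,     /* memory hierarchy level */
                          mem_op:5;       /* type of opcode */
          };
  #else
          struct {
                  __u64   mem_op:5,       /* type of opcode */
                          mem_lvl:14,     /* memory hierarchy level */
                          mem_snoop:5,    /* snoop mode */
                          mem_lock:2,     /* lock instr */
                          mem_dtlb:7,     /* TLB access */
                          mem_rsvd:31;
          };
  #endif
  };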
2017-04-19 | powerpc/iommu: Do not call PageTransHuge() on tail pages | Alexey Kardashevskiy | 1 | -2/+2
The CMA pages migration code does not support compound pages at the moment, so it performs a few tests before proceeding to actual page migration. One of the tests - PageTransHuge() - has VM_BUG_ON_PAGE(PageTail()) as it is designed to be called on head pages only. Since we also test for PageCompound(), and it covers both PageTail() and PageHead(), we can simplify the check by leaving just PageCompound() and therefore avoid a possible VM_BUG_ON_PAGE. Fixes: 2e5bbb5461f1 ("KVM: PPC: Book3S HV: Migrate pinned pages out of CMA") Cc: stable@vger.kernel.org # v4.9+ Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Acked-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/mmap: Any hint > 128TB searches the full VA space | Aneesh Kumar K.V | 2 | -3/+6
As part of the new large address space support, processes start out life with a 128TB virtual address space. However when calling mmap() a process can pass a hint address, and if that hint is > 128TB the kernel will use the full 512TB address space to try and satisfy the mmap() request. Currently we have a check that the hint is > 128TB and < 512TB (TASK_SIZE), which was added as an optimisation to avoid updating addr_limit unnecessarily and also to avoid calling slice_flush_segments() on all CPUs more than necessary. However this has the user-visible side effect that an mmap() hint above 512TB does not search the full address space unless a preceding mmap() used a hint value > 128TB && < 512TB. So fix it to treat any hint above 128TB as a hint to search the full address space, instead of checking the hint against TASK_SIZE, we instead check if the addr_limit is already == TASK_SIZE. This also brings the ABI in-line with what is proposed on x86. ie, that a hint address above 128TB up to and including (2^64)-1 is an indication to search the full address space. Fixes: f4ea6dcb08ea2c (powerpc/mm: Enable mappings above 128TB) Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | cxl: Enable PCI device IDs for future IBM CXL adapters | Matthew R. Ochs | 1 | -0/+2
Add support for future IBM Coherent Accelerator (CXL) devices with an IDs of 0x0623 and 0x0628. Signed-off-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com> Signed-off-by: Uma Krishnan <ukrishn@linux.vnet.ibm.com> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/64s: Minor fix for MCE TLB flush for radix | Nicholas Piggin | 1 | -2/+6
The TLB flush for radix first flushes TLB for radix configuration, then flushes for hash configuration. The second flush is unnecessary but does not affect correctness. Fixes: 1a472c9dba6b9 ("powerpc/mm/radix: Add tlbflush routines") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/mm/radix: Use mm->task_size for boundary checking instead of addr_limit | Aneesh Kumar K.V | 4 | -10/+10
We don't init addr_limit correctly for 32 bit applications. So default to using mm->task_size for boundary condition checking. We use addr_limit to only control free space search. This makes sure that we do the right thing with 32 bit applications. We should consolidate the usage of TASK_SIZE/mm->task_size and mm->context.addr_limit later. This partially reverts commit fbfef9027c2a7ad (powerpc/mm: Switch some TASK_SIZE checks to use mm_context addr_limit). Fixes: fbfef9027c2a ("powerpc/mm: Switch some TASK_SIZE checks to use mm_context addr_limit") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-19 | powerpc/64s: Revert setting of LPCR[LPES] on POWER9 | Nicholas Piggin | 1 | -2/+2
The XIVE enablement patches included a change to set the LPES (Logical Partitioning Environment Selector) bit (bit # 3) in LPCR (Logical Partitioning Control Register) on POWER9 hosts. This bit sets external interrupts to guest delivery mode, which uses SRR0/1. The host's EE interrupt handler is written to expect HSRR0/1 (for earlier CPUs). This should be fine because XIVE is configured not to deliver EEs to the host (Hypervisor Virtualization Interrupt is used instead) so the EE handler should never be executed. However a bug in interrupt controller code, hardware, or odd configuration of a simulator could result in the host getting an EE incorrectly. Keeping the EE delivery mode matching the host EE handler prevents strange crashes due to using the wrong exception registers. KVM will configure the LPCR to set LPES prior to running a guest so that EEs are delivered to the guest using SRR0/1. Fixes: 08a1e650cc ("powerpc: Fixup LPCR:PECE and HEIC setting on POWER9") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Massage change log to avoid referring to LPES0 which is now renamed LPES] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc/pseries: Always enable SMP when building pseries | Michael Ellerman | 1 | -1/+2
The pseries platform supports Power4 and later CPUs, all of which are multithreaded and/or multicore. In practice no one ever builds a SMP=n kernel for these machines. So as we did for powernv, have the pseries platform imply SMP=y. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc/powernv: Always enable SMP when building powernv | Michael Ellerman | 1 | -0/+1
The powernv platform supports Power7 and later CPUs, all of which are multithreaded and multicore. As such we never build a SMP=n kernel for those machines, other than possibly for debugging or running in a simulator. In the debugging case we can get a similar effect by booting with nr_cpus=1, or there's always the option of building a custom kernel with SMP hacked out. For running in simulators the code size reduction from building without SMP is not particularly important, what matters is the number of instructions executed. A quick test shows that a SMP=y kernel takes ~6% more instructions to boot to a shell. Booting with nr_cpus=1 recovers about half that deficit. On the flip side, keeping the SMP=n kernel building can be a pain at times. And although we've mostly kept it building in recent years, no one is regularly testing that the SMP=n kernel actually boots and works well on these machines. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc: Allow platforms to force-enable CONFIG_SMP | Michael Ellerman | 1 | -1/+7
Of the 64-bit Book3S platforms, only powermac supports booting on an actual non-SMP system. The other platforms can be built with SMP disabled, but it doesn't make a lot of sense given the CPUs they support are all multicore or multithreaded. So give platforms the option of forcing SMP=y. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc: Drop include of linux/io.h from asm/io.h | Michael Ellerman | 1 | -2/+0
Currently powerpc's asm/io.h includes linux/io.h, and linux/io.h includes asm/io.h. This can cause problems because depending on which is included first the order of definitions between the two files will change. The include of linux/io.h was added back in 2008 in commit b41e5fffe8b8 ("[POWERPC] devres: Add devm_ioremap_prot()"). It's not entirely clear it was needed then, but devm_ioremap_prot() has since been removed entirely as unused, in dedd24a12fe6 ("powerpc: Remove unused devm_ioremap_prot()"). So it seems to be unnecessary and can potentially cause problems, so remove the include of linux/io.h from asm/io.h Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc/powernv: POWER9 support for msgsnd/doorbell IPI | Nicholas Piggin | 3 | -2/+31
POWER9 requires msgsync for receiver-side synchronization, and a DD1 workaround restricts IPIs to core-local. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Drop no longer needed asm feature macro changes] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc/64s: Avoid a branch for ppc_msgsnd | Nicholas Piggin | 1 | -4/+2
IPIs are a pretty hot path and we already have the ability to do asm feature patching, so use it. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Change log detail] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc: Introduce msgsnd/doorbell barrier primitives | Nicholas Piggin | 4 | -5/+36
POWER9 changes requirements and adds new instructions for synchronization. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc: Change the doorbell IPI calling convention | Nicholas Piggin | 14 | -79/+92
Change the doorbell callers to know about their msgsnd addressing, rather than have them set a per-cpu target data tag at boot that gets sent to the cause_ipi functions. The data is only used for doorbell IPI functions, no other IPI types, so it makes sense to keep that detail local to doorbell. Have the platform code understand doorbell IPIs, rather than the interrupt controller code understand them. Platform code can look at capabilities it has available and decide which to use. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-13 | powerpc/64s: Add SCV FSCR bit for ISA v3.0 | Nicholas Piggin | 2 | -0/+3
Add the bit definition and use it in facility_unavailable_exception() so we can intelligently report the cause if we take a fault for SCV. This doesn't actually enable SCV. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Drop whitespace changes to the existing entries, flush out change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
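For reference, FSCR bits are defined in the kernel as a bit position plus a mask; a sketch assuming the SCV facility is FSCR bit 51 in ISA v3.0, i.e. bit 12 counting from the least-significant end:

  /* Sketch: bit position assumed from ISA v3.0, not quoted from the patch. */
  #define FSCR_SCV_LG     12      /* Enable System Call Vectored */
  #define FSCR_SCV        __MASK(FSCR_SCV_LG)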