path: root/arch/powerpc/platforms
Age          Commit message  (Author, files changed, lines -/+)
2019-01-03  Remove 'type' argument from access_ok() function  (Linus Torvalds, 3 files, -11/+11)
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand.

It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact.

A discussion about extending 'user_access_begin()' to do the range checking resulted in this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all.

This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form. There were a couple of notable cases:

 - csky still had the old "verify_area()" name as an alias.

 - the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it)

 - microblaze used the type argument for a debug printout

but other than those oddities this should be a total no-op patch. I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
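For reference, most call sites were converted mechanically from the three-argument to the two-argument form; a minimal before/after sketch (the pointer and length names are illustrative, not taken from any particular call site):

    /* Before: the 'type' argument was already ignored */
    if (!access_ok(VERIFY_WRITE, ubuf, len))
            return -EFAULT;

    /* After: only the user pointer and length are checked */
    if (!access_ok(ubuf, len))
            return -EFAULT;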
2018-12-29  Merge tag 'kconfig-v4.21-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds, 19 files, -47/+47)
Pull Kconfig file consolidation from Masahiro Yamada:
 "Consolidation of bus (PCI, PCMCIA, EISA, RapidIO) config entries by Christoph Hellwig.

  Currently, every architecture that wants to provide common peripheral busses needs to add some boilerplate code and include the right Kconfig files. This series instead just selects the presence (when needed) and then handles everything in the bus-specific Kconfig file under drivers/"

* tag 'kconfig-v4.21-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  pcmcia: remove per-arch PCMCIA config entry
  eisa: consolidate EISA Kconfig entry in drivers/eisa
  rapidio: consolidate RAPIDIO config entry in drivers/rapidio
  pcmcia: allow PCMCIA support independent of the architecture
  PCI: consolidate the PCI_SYSCALL symbol
  PCI: consolidate the PCI_DOMAINS and PCI_DOMAINS_GENERIC config options
  PCI: consolidate PCI config entry in drivers/pci
  MIPS: remove the HT_PCI config option
2018-12-28  Merge tag 'devicetree-for-4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux  (Linus Torvalds, 1 file, -0/+2)
Pull Devicetree updates from Rob Herring:
 "The biggest highlight here is the start of using json-schema for DT bindings. Being able to validate bindings has been discussed for years with little progress.

  - Initial support for DT bindings using json-schema language. This is the start of converting DT bindings from free-form text to a structured format.

  - Reworking of initrd address initialization. This moves to using the phys address instead of virt addr in the DT parsing code. This rework was motivated by CONFIG_DEV_BLK_INITRD causing unnecessary rebuilding of lots of files.

  - Fix stale phandle entries in phandle cache

  - DT overlay validation improvements. This exposed several memory leak bugs which have been fixed.

  - Use node name and device_type helper functions in DT code

  - Last remaining conversions to using %pOFn printk specifier instead of device_node.name directly

  - Create new common RTC binding doc and move all trivial RTC devices out of trivial-devices.txt.

  - New bindings for Freescale MAG3110 magnetometer, Cadence Sierra PHY, and Xen shared memory

  - Update dtc to upstream version v1.4.7-57-gf267e674d145"

* tag 'devicetree-for-4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (68 commits)
  of: __of_detach_node() - remove node from phandle cache
  of: of_node_get()/of_node_put() nodes held in phandle cache
  gpio-omap.txt: add reg and interrupts properties
  dt-bindings: mrvl,intc: fix a trivial typo
  dt-bindings: iio: magnetometer: add dt-bindings for freescale mag3110
  dt-bindings: Convert trivial-devices.txt to json-schema
  dt-bindings: arm: mrvl: amend Browstone compatible string
  dt-bindings: arm: Convert Tegra board/soc bindings to json-schema
  dt-bindings: arm: Convert ZTE board/soc bindings to json-schema
  dt-bindings: arm: Add missing Xilinx boards
  dt-bindings: arm: Convert Xilinx board/soc bindings to json-schema
  dt-bindings: arm: Convert VIA board/soc bindings to json-schema
  dt-bindings: arm: Convert ST STi board/soc bindings to json-schema
  dt-bindings: arm: Convert SPEAr board/soc bindings to json-schema
  dt-bindings: arm: Convert CSR SiRF board/soc bindings to json-schema
  dt-bindings: arm: Convert QCom board/soc bindings to json-schema
  dt-bindings: arm: Convert TI nspire board/soc bindings to json-schema
  dt-bindings: arm: Convert TI davinci board/soc bindings to json-schema
  dt-bindings: arm: Convert Calxeda board/soc bindings to json-schema
  dt-bindings: arm: Convert Altera board/soc bindings to json-schema
  ...
2018-12-28  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds, 1 file, -5/+5)
Merge misc updates from Andrew Morton:

 - large KASAN update to use arm's "software tag-based mode"

 - a few misc things

 - sh updates

 - ocfs2 updates

 - just about all of MM

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (167 commits)
  kernel/fork.c: mark 'stack_vm_area' with __maybe_unused
  memcg, oom: notify on oom killer invocation from the charge path
  mm, swap: fix swapoff with KSM pages
  include/linux/gfp.h: fix typo
  mm/hmm: fix memremap.h, move dev_page_fault_t callback to hmm
  hugetlbfs: Use i_mmap_rwsem to fix page fault/truncate race
  hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization
  memory_hotplug: add missing newlines to debugging output
  mm: remove __hugepage_set_anon_rmap()
  include/linux/vmstat.h: remove unused page state adjustment macro
  mm/page_alloc.c: allow error injection
  mm: migrate: drop unused argument of migrate_page_move_mapping()
  blkdev: avoid migration stalls for blkdev pages
  mm: migrate: provide buffer_migrate_page_norefs()
  mm: migrate: move migrate_page_lock_buffers()
  mm: migrate: lock buffers before migrate_page_move_mapping()
  mm: migration: factor out code to compute expected number of page references
  mm, page_alloc: enable pcpu_drain with zone capability
  kmemleak: add config to select auto scan
  mm/page_alloc.c: don't call kasan_free_pages() at deferred mem init
  ...
2018-12-28  Merge tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping  (Linus Torvalds, 2 files, -3/+1)
Pull DMA mapping updates from Christoph Hellwig:
 "A huge update this time, but a lot of that is just consolidating or removing code:

  - provide a common DMA_MAPPING_ERROR definition and avoid indirect calls for dma_map_* error checking

  - use direct calls for the DMA direct mapping case, avoiding huge retpoline overhead for high performance workloads

  - merge the swiotlb dma_map_ops into dma-direct

  - provide a generic remapping DMA consistent allocator for architectures that have devices that perform DMA that is not cache coherent. Based on the existing arm64 implementation and also used for csky now.

  - improve the dma-debug infrastructure, including dynamic allocation of entries (Robin Murphy)

  - default to providing chaining scatterlist everywhere, with opt-outs for the few architectures (alpha, parisc, most arm32 variants) that can't cope with it

  - misc sparc32 dma-related cleanups

  - remove the dma_mark_clean arch hook used by swiotlb on ia64 and replace it with the generic noncoherent infrastructure

  - fix the return type of dma_set_max_seg_size (Niklas Söderlund)

  - move the dummy dma ops for not DMA capable devices from arm64 to common code (Robin Murphy)

  - ensure dma_alloc_coherent returns zeroed memory to avoid kernel data leaks through userspace. We already did this for most common architectures, but this ensures we do it everywhere. dma_zalloc_coherent has been deprecated and can hopefully be removed after -rc1 with a coccinelle script"

* tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping: (73 commits)
  dma-mapping: fix inverted logic in dma_supported
  dma-mapping: deprecate dma_zalloc_coherent
  dma-mapping: zero memory returned from dma_alloc_*
  sparc/iommu: fix ->map_sg return value
  sparc/io-unit: fix ->map_sg return value
  arm64: default to the direct mapping in get_arch_dma_ops
  PCI: Remove unused attr variable in pci_dma_configure
  ia64: only select ARCH_HAS_DMA_COHERENT_TO_PFN if swiotlb is enabled
  dma-mapping: bypass indirect calls for dma-direct
  vmd: use the proper dma_* APIs instead of direct methods calls
  dma-direct: merge swiotlb_dma_ops into the dma_direct code
  dma-direct: use dma_direct_map_page to implement dma_direct_map_sg
  dma-direct: improve addressability error reporting
  swiotlb: remove dma_mark_clean
  swiotlb: remove SWIOTLB_MAP_ERROR
  ACPI / scan: Refactor _CCA enforcement
  dma-mapping: factor out dummy DMA ops
  dma-mapping: always build the direct mapping code
  dma-mapping: move dma_cache_sync out of line
  dma-mapping: move various slow path functions out of line
  ...
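One of the user-visible changes above is that dma_alloc_coherent() now returns zeroed memory, so dma_zalloc_coherent() callers can be converted as in this sketch (the device, size, and handle names are illustrative):

    /* Before: explicit zeroing wrapper */
    buf = dma_zalloc_coherent(dev, size, &dma_handle, GFP_KERNEL);

    /* After: dma_alloc_coherent() already returns zeroed memory */
    buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;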
2018-12-28  mm: convert totalram_pages and totalhigh_pages variables to atomic  (Arun KS, 1 file, -5/+5)
totalram_pages and totalhigh_pages are now accessed through static inline functions. The main motivation was that the managed_page_count_lock handling was complicating things. It was discussed at length here, https://lore.kernel.org/patchwork/patch/995739/#1181785 So it seems better to remove the lock and convert the variables to atomic, with preventing potential store-to-read tearing as a bonus. [akpm@linux-foundation.org: coding style fixes] Link: http://lkml.kernel.org/r/1542090790-21750-4-git-send-email-arunks@codeaurora.org Signed-off-by: Arun KS <arunks@codeaurora.org> Suggested-by: Michal Hocko <mhocko@suse.com> Suggested-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
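The accessor pattern looks roughly like the sketch below (simplified from what this series adds to include/linux/mm.h; treat it as illustrative rather than the exact upstream code):

    extern atomic_long_t _totalram_pages;

    static inline unsigned long totalram_pages(void)
    {
            /* an atomic read avoids store-to-read tearing without a lock */
            return (unsigned long)atomic_long_read(&_totalram_pages);
    }

    static inline void totalram_pages_add(long count)
    {
            atomic_long_add(count, &_totalram_pages);
    }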
2018-12-24  Merge branch 'next' of https://git.kernel.org/pub/scm/linux/kernel/git/scottwood/linux into next  (Michael Ellerman, 1 file, -0/+17)
Freescale updates from Scott: "Highlights include elimination of legacy clock bindings use from dts files, an 83xx watchdog handler, fixes to old dts interrupt errors, and some minor cleanup."
2018-12-22  powerpc: Use of_node_name_eq for node name comparisons  (Rob Herring, 14 files, -60/+36)
Convert string compares of DT node names to use the of_node_name_eq helper instead. This removes direct access to the node name pointer. A couple of places that open coded iterating through the child node names are converted to use for_each_child_of_node() instead. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
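The conversion pattern is roughly the following (the node variable and the do_l2_setup() helper are illustrative, not from the actual hunks):

    /* Before: direct access to the node name pointer */
    if (np->name && !strcmp(np->name, "l2-cache"))
            do_l2_setup(np);

    /* After: the helper hides how the name is stored */
    if (of_node_name_eq(np, "l2-cache"))
            do_l2_setup(np);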
2018-12-22  powerpc/pseries/pmem: Convert to %pOFn instead of device_node.name  (Rob Herring, 1 file, -4/+4)
In preparation to remove the node name pointer from struct device_node, convert printf users to use the %pOFn format specifier. pmem.c was recently added and missed the initial conversion. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
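The %pOFn conversion looks like this in practice (the message text is illustrative):

    /* Before: dereferences device_node.name directly */
    pr_debug("registering region %s\n", dn->name);

    /* After: %pOFn prints the node name via the printf extension */
    pr_debug("registering region %pOFn\n", dn);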
2018-12-22  powerpc/pseries: Fix node leak in update_lmb_associativity_index()  (Michael Ellerman, 1 file, -0/+1)
In update_lmb_associativity_index() we lookup dr_node using of_find_node_by_path() which takes a reference for us. In the non-error case we forget to drop the reference. Note that find_aa_index() does modify properties of the node, but doesn't need an extra reference held once it's returned. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
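The fix follows the usual of_find_node_by_path()/of_node_put() pairing; a minimal sketch of the pattern (the path string and the lookup_index() helper are stand-ins, not the exact code):

    struct device_node *dr_node;
    int aa_index;

    /* of_find_node_by_path() returns the node with a reference held */
    dr_node = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
    if (!dr_node)
            return -ENOENT;

    aa_index = lookup_index(dr_node);    /* stands in for find_aa_index() */

    of_node_put(dr_node);                /* the reference drop this patch adds */
    return aa_index;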
2018-12-21  powerpc/83xx: handle machine check caused by watchdog timer  (Christophe Leroy, 1 file, -0/+17)
When the watchdog timer is set in interrupt mode, it causes a machine check when it times out. The purpose of this mode is to ease debugging, not to crash the kernel and reboot the machine. This patch implements special handling for that case, so as not to crash the kernel if the watchdog times out while in an interrupt or within the idle task. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [scottwood: added missing #include] Signed-off-by: Scott Wood <oss@buserror.net>
2018-12-21  powerpc/powernv/npu: Fault user page into the hypervisor's pagetable  (Alexey Kardashevskiy, 1 file, -6/+7)
When a page fault happens in a GPU, the GPU signals the OS and the GPU driver calls the fault handler which populates a page table; this allows the GPU to complete an ATS request. On bare metal, get_user_pages() is enough as it adds a pte to the kernel page table, but under KVM the partition scope tree does not get updated, so ATS will still fail. This reads a byte from an effective address, which causes an HV storage interrupt and makes KVM update the partition scope tree. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
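A minimal sketch of the "read a byte to fault the page in" approach described above (the ea variable is illustrative; the real code lives in the powernv NPU fault path):

    char c;

    /*
     * Touching the address with get_user() raises an HV storage
     * interrupt under KVM, which causes the hypervisor to insert the
     * translation into the partition scope tree.
     */
    if (get_user(c, (char __user *)ea))
            return -EFAULT;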
2018-12-21  powerpc/powernv/npu: Check mmio_atsd array bounds when populating  (Alexey Kardashevskiy, 1 file, -2/+3)
A broken device tree might contain more than 8 values and introduce a hard-to-debug memory corruption bug. This adds a bounds check. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
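The added check is essentially a clamp on the number of device-tree-provided values; a hedged sketch whose field and property names only approximate the real code:

    /* Never write past the fixed-size register array, even if the
     * device tree advertises more than 8 ATSD values. */
    nr_atsd = of_property_count_u64_elems(dn, "ibm,mmio-atsd");
    if (nr_atsd > ARRAY_SIZE(npu->mmio_atsd_regs))
            nr_atsd = ARRAY_SIZE(npu->mmio_atsd_regs);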
2018-12-21  powerpc/powernv/npu: Add release_ownership hook  (Alexey Kardashevskiy, 1 file, -0/+51)
In order to make ATS work and translate addresses for arbitrary LPID and PID, we need to program an NPU with LPID and allow PID wildcard matching with a specific MSR mask. This implements a helper to assign a GPU to LPAR and program the NPU with a wildcard for PID and a helper to do clean-up. The helper takes MSR (only DR/HV/PR/SF bits are allowed) to program them into NPU2 for ATS checkout requests support. This exports pnv_npu2_unmap_lpar_dev() as following patches will use it from the VFIO driver. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/powernv/npu: Add compound IOMMU groups  (Alexey Kardashevskiy, 3 files, -136/+321)
At the moment the powernv platform registers an IOMMU group for each PE. There is an exception though: an NVLink bridge which is attached to the corresponding GPU's IOMMU group, making it a master. Now we have POWER9 systems with GPUs connected to each other directly, bypassing PCI. At the moment we do not control the state of these links so we have to put such interconnected GPUs into one IOMMU group, which means that the old scheme with one GPU as a master won't work - there will be up to 3 GPUs in such a group. This introduces an npu_comp struct which represents a compound IOMMU group made of multiple PEs - PCI PEs (for GPUs) and NPU PEs (for NVLink bridges). This converts the existing NVLink1 code to use the new scheme. From now on, each PE must have a valid iommu_table_group_ops which will either be called directly (for a single PE group) or indirectly from the compound group handlers. This moves IOMMU group registration for NVLink-connected GPUs to npu-dma.c. For POWER8, this stores a new compound group pointer in the PE (so a GPU is still a master); for POWER9 the new group pointer is stored in an NPU (which is allocated per PCI host controller). Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> [mpe: Initialise npdev to NULL in pnv_try_setup_npu_table_group()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/powernv/npu: Convert NPU IOMMU helpers to iommu_table_group_ops  (Alexey Kardashevskiy, 3 files, -15/+34)
At the moment the NPU IOMMU is manipulated directly from the IODA2 PCI PE code; the PCI PE acts as a master to the NPU PE. Soon we will have compound IOMMU groups with several PEs from several different PHBs (such as interconnected GPUs and NPUs), so there will be no single master but one big IOMMU group. This makes a first step and converts an NPU PE from a set of extern functions to a table group. This should cause no behavioral change. Note that pnv_npu_release_ownership() has never been implemented. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/powernv/npu: Move single TVE handling to NPU PE  (Alexey Kardashevskiy, 2 files, -24/+11)
Normal PCI PEs have 2 TVEs, one per DMA window; however an NPU PE has only one, which points to one of the two tables of the corresponding PCI PE. So whenever a new DMA window is programmed to PEs, the NPU PE needs to release the old table in order to use the new one. Commit d41ce7b1bcc3e ("powerpc/powernv/npu: Do not try invalidating 32bit table when 64bit table is enabled") did just that, but in pci-ioda.c while it actually belongs in npu-dma.c. This moves the single TVE handling to npu-dma.c. This does not implement restoring though, as it is highly unlikely that we could set the table on the PCI PE but not on the NPU PE; and if that fails, we could only set the 32bit table on the NPU PE, and this configuration is not really supported or wanted. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/powernv: Reference iommu_table while it is linked to a group  (Alexey Kardashevskiy, 2 files, -5/+2)
The iommu_table pointer stored in iommu_table_group may get stale by accident; this takes a reference on the table while it is linked to a group and removes a redundant comment about it. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/iommu_api: Move IOMMU groups setup to a single place  (Alexey Kardashevskiy, 1 file, -14/+68)
Registering new IOMMU groups and adding devices to them are separated in the code, and the latter is buried in the DMA setup code which it does not really belong to. This moves IOMMU group setup to a separate helper which registers a group and adds devices as before. This does not make a difference as IOMMU groups are not used anyway; the only dependency here is that iommu_add_device() requires a valid pointer to an iommu_table (set by set_iommu_table_base()). To keep the old behaviour, this does not add new IOMMU groups for PEs with no DMA weight and also skips NVLink bridges which do not have pci_controller_ops::setup_bridge (the normal way of adding PEs). Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/powernv/pseries: Rework device adding to IOMMU groups  (Alexey Kardashevskiy, 3 files, -32/+67)
The powernv platform registers IOMMU groups and adds devices to them from the pci_controller_ops::setup_bridge() hook, except one case when virtual functions (SRIOV VFs) are added from a bus notifier. The pseries platform registers IOMMU groups from the pci_controller_ops::dma_bus_setup() hook and adds devices from the pci_controller_ops::dma_dev_setup() hook. The very same bus notifier used for powernv does not add devices for pseries though, as __of_scan_bus() adds devices first and then does the bus/dev DMA setup. Both platforms use iommu_add_device() which takes a device and expects it to have a valid IOMMU table struct with an iommu_table_group pointer which in turn points to the iommu_group struct (which represents an IOMMU group). Although the helper seems easy to use, it relies on some pre-existing device configuration and associated data structures which it does not really need. This simplifies iommu_add_device() to take the table_group pointer directly. Pseries already has a table_group pointer handy and the bus notifier is not used anyway. For powernv, this copies the existing bus notifier and makes it work for powernv only, which gives an easy way of getting to the table_group pointer. This was tested on VFs but should also support physical PCI hotplug. Since iommu_add_device() receives the table_group pointer directly, and pseries neither does TCE cache invalidation (the hypervisor does) nor allows multiple groups per VFIO container (in other words, sharing an IOMMU table between partitionable endpoints), this removes iommu_table_group_link from pseries. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/pseries: Remove IOMMU API support for non-LPAR systems  (Alexey Kardashevskiy, 1 file, -7/+2)
The pci_dma_bus_setup_pSeries and pci_dma_dev_setup_pSeries hooks are registered for the pseries platform which does not have FW_FEATURE_LPAR; these would be pre-powernv platforms which we never supported PCI pass through for anyway so remove it. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/pseries/npu: Enable platform support  (Alexey Kardashevskiy, 1 file, -0/+22)
We already changed the NPU API for GPUs not to call OPAL and the remaining bit is initializing NPU structures. This searches for POWER9 NVLinks attached to any device on a PHB and initializes an NPU structure if any are found. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/pseries/iommu: Use memory@ nodes in max RAM address calculation  (Alexey Kardashevskiy, 1 file, -1/+32)
We might have memory@ nodes with "linux,usable-memory" set to zero (for example, to replicate powernv's behaviour for GPU coherent memory), which means that the memory needs extra initialization but can be used afterwards. The pseries platform will therefore try mapping it for DMA, so the DMA window needs to cover those memory regions too; if the window cannot cover new memory regions, the memory onlining fails. This walks through the memory nodes to find the highest RAM address, to let a huge DMA window cover that too in case this memory gets onlined later. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/powernv/npu: Move OPAL calls away from context manipulation  (Alexey Kardashevskiy, 3 files, -54/+74)
When introduced, the NPU context init/destroy helpers called OPAL, which enabled/disabled PID (a userspace memory context ID) filtering in an NPU per GPU; this was a requirement for P9 DD1.0. However a newer chip revision added PID wildcard support, so there is no longer a need to call OPAL every time a new context is initialized. Also, since the PID wildcard support was added, skiboot does not clear wildcard entries in the NPU, so these remain in the hardware until the system reboots. This moves LPID and wildcard programming to the PE setup code, which executes once during the boot process, so NPU2 context init/destroy won't need to do additional configuration. This replaces the check for FW_FEATURE_OPAL with a check for npu != NULL as this is the way to tell if NPU support is present and configured. This moves the pnv_npu2_init() declaration as pseries should be able to use it. This keeps pnv_npu2_map_lpar() in powernv as pseries is not allowed to call that. This exports pnv_npu2_map_lpar_dev() as following patches will use it from the VFIO driver. While at it, replace a redundant list_for_each_entry_safe() with a simpler list_for_each_entry(). Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/powernv: Move npu struct from pnv_phb to pci_controller  (Alexey Kardashevskiy, 3 files, -35/+57)
The powernv PCI code stores NPU data in the pnv_phb struct. The latter is referenced by pci_controller::private_data. We are going to have NPU2 support in the pseries platform as well, but it does not store any private_data in the pci_controller struct; and even if it did, it would be a different data structure. This makes npu a pointer and stores it one level higher, in the pci_controller struct. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/ioda/npu: Call skiboot's hot reset hook when disabling NPU2  (Alexey Kardashevskiy, 1 file, -0/+10)
The skiboot firmware has a hot reset handler which fences the NVIDIA V100 GPU RAM on Witherspoons and makes accesses no-op instead of throwing HMIs: https://github.com/open-power/skiboot/commit/fca2b2b839a67 Now we are going to pass V100 via VFIO which most certainly involves KVM guests which are often terminated without getting a chance to offline GPU RAM so we end up with a running machine with misconfigured memory. Accessing this memory produces hardware management interrupts (HMI) which bring the host down. To suppress HMIs, this wires up this hot reset hook to vfio_pci_disable() via pci_disable_device() which switches NPU2 to a safe mode and prevents HMIs. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Acked-by: Alistair Popple <alistair@popple.id.au> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc: generate uapi header and system call table files  (Firoz Khan, 1 file, -14/+3)
The system call table generation script must be run to generate the unistd_32/64.h and syscall_table_32/64/c32/spu.h files. This patch contains the changes which invoke the script. The unistd_32/64.h and syscall_table_32/64/c32/spu.h files are generated by the syscall table generation script invoked by the powerpc Makefile, and the generated files must be identical to the removed ones. The generated uapi header file will be included in uapi/asm/unistd.h and the generated system call table header file will be included by the kernel/systbl.S file. Signed-off-by: Firoz Khan <firoz.khan@linaro.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/fadump: Do not allow hot-remove memory from fadump reserved area.  (Mahesh Salgaonkar, 1 file, -2/+5)
For fadump to work successfully there should not be any holes in the reserved memory ranges where the kernel has asked firmware to move the content of old kernel memory in the event of a crash. Now that fadump uses CMA for the reserved area, this memory area is no longer protected from hot-remove operations unless it is CMA allocated. Hence, the fadump service can fail to re-register after a hot-remove operation if the hot-removed memory belongs to the fadump reserved region. To avoid this, make sure that memory from the fadump reserved area is not hot-removable if fadump is registered. However, if the user still wants to remove that memory, they can do so by manually stopping the fadump service before the hot-remove operation. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/powernv: Move opal_power_control_init() call in opal_init().  (Mahesh Salgaonkar, 2 files, -2/+4)
opal_power_control_init() depends on the opal message notifier being initialized, which is done in opal_init()->opal_message_init(). But both of these initializations are called through machine initcalls and it all depends on the order in which they are called. So far they have been called in the correct order (maybe we got lucky) and we never saw any issue. But it is clearer to control the initialization order explicitly by moving opal_power_control_init() into opal_init(). Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-21  powerpc/4xx: Delete an unnecessary return statement in two functions  (Markus Elfring, 2 files, -3/+0)
The script "checkpatch.pl" pointed out the following: WARNING: void function return statements are not generally useful Thus remove such a statement in the affected functions. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
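The removed statements are of the form below; an illustrative example (the function and helper names are made up):

    /* Before: flagged by checkpatch.pl */
    static void example_shutdown(void)
    {
            do_teardown();
            return;
    }

    /* After */
    static void example_shutdown(void)
    {
            do_teardown();
    }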
2018-12-21  powerpc/4xx: Delete error message for an ENOMEM in two functions  (Markus Elfring, 1 file, -4/+1)
Omit an extra message for a memory allocation failure in these functions. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
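The removed messages follow the common "print on allocation failure" anti-pattern; a sketch of the cleanup (the names and message text are illustrative), relying on the fact that the slab allocator already emits a warning with a backtrace on failure unless __GFP_NOWARN is passed:

    /* Before: extra message on allocation failure */
    buf = kzalloc(size, GFP_KERNEL);
    if (!buf) {
            printk(KERN_ERR "example: out of memory\n");
            return -ENOMEM;
    }

    /* After: the allocator already logs the failure */
    buf = kzalloc(size, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;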
2018-12-21  powerpc/4xx: Use seq_putc() in ocm_debugfs_show()  (Markus Elfring, 1 file, -1/+1)
A single character (line break) should be put into a sequence. Thus use the corresponding function "seq_putc". This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
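The change is the classic one-character substitution:

    /* Before */
    seq_printf(m, "\n");

    /* After: cheaper call for a single character */
    seq_putc(m, '\n');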
2018-12-21  powerpc/4xx: Combine four seq_printf() calls into two in ocm_debugfs_show()  (Markus Elfring, 1 file, -6/+2)
Some data were printed into a sequence by four separate function calls. Print the same data with two function calls instead. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
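Illustratively (the labels and variables below are stand-ins; the actual format strings in ocm_debugfs_show() differ), the consolidation looks like:

    /* Before: four separate calls */
    seq_printf(m, "PhysAddr : 0x%llx\n", phys);
    seq_printf(m, "Total    : %u bytes\n", total);
    seq_printf(m, "NC total : %u bytes\n", nc_total);
    seq_printf(m, "C total  : %u bytes\n", c_total);

    /* After: the same output from two calls */
    seq_printf(m, "PhysAddr : 0x%llx\nTotal    : %u bytes\n", phys, total);
    seq_printf(m, "NC total : %u bytes\nC total  : %u bytes\n", nc_total, c_total);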
2018-12-21  powerpc/powernv: Remove PCI_MSI ifdef checks  (Oliver O'Halloran, 3 files, -17/+0)
CONFIG_PCI_MSI was made mandatory by commit a311e738b6d8 ("powerpc/powernv: Make PCI non-optional") so the #ifdef checks around CONFIG_PCI_MSI here can be removed entirely. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Reviewed-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
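The removals are of the form below; with CONFIG_PCI_MSI always enabled on powernv the guards are dead (the setup_msi_support() call is a hypothetical stand-in for the guarded code):

    /* Before */
    #ifdef CONFIG_PCI_MSI
            setup_msi_support(phb);
    #endif

    /* After: CONFIG_PCI_MSI is always set on powernv */
            setup_msi_support(phb);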
2018-12-20  powerpc/powernv/eeh/npu: Fix uninitialized variables in opal_pci_eeh_freeze_status  (Alexey Kardashevskiy, 3 files, -8/+8)
The current implementation of the OPAL_PCI_EEH_FREEZE_STATUS call in skiboot's NPU driver does not touch the pci_error_type parameter so it might have garbage but the powernv code analyzes it nevertheless. This initializes pcierr and fstate to zero in all call sites. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
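The fix simply gives the output parameters a defined value before the OPAL call; a hedged sketch (the types and the call shape only approximate the powernv EEH code):

    u8 fstate = 0;          /* freeze state: defined even if OPAL leaves it untouched */
    __be16 pcierr = 0;      /* PCI error type: likewise */

    rc = opal_pci_eeh_freeze_status(phb->opal_id, pe_no,
                                    &fstate, &pcierr, NULL);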
2018-12-20  powerpc/powernv/ioda: Reduce a number of hooks in pnv_phb  (Alexey Kardashevskiy, 2 files, -10/+3)
fixup_phb() is never used, this removes it. pick_m64_pe() and reserve_m64_pe() are always defined for all powernv PHBs: they are initialized by pnv_ioda_parse_m64_window() which is called unconditionally from pnv_pci_init_ioda_phb() which initializes all known PHB types on powernv so we can open code them. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/powernv/ioda1: Remove dead code for a single device PE  (Alexey Kardashevskiy, 1 file, -9/+1)
At the moment PNV_IODA_PE_DEV is only used for NPU PEs which are not present on IODA1 machines (i.e. POWER7) so let's remove a piece of dead code. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/powernv/npu: Remove unused headers and a macro.  (Alexey Kardashevskiy, 1 file, -13/+0)
The macro and a few headers are not used, so remove them. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Acked-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/powernv/ioda: Allocate indirect TCE levels of cached userspace addresses on demand  (Alexey Kardashevskiy, 1 file, -1/+1)
The powernv platform maintains 2 TCE tables for VFIO - a hardware TCE table and a table with userspace addresses; the latter is used for marking pages dirty when the corresponding TCEs are unmapped from the hardware table. Commit a68bd1267b72 ("powerpc/powernv/ioda: Allocate indirect TCE levels on demand") enabled on-demand allocation of the hardware table; however, it missed the other table, which has still been fully allocated at boot time. This fixes the issue by allocating a single level, just like we do for the hardware table. Fixes: a68bd1267b72 ("powerpc/powernv/ioda: Allocate indirect TCE levels on demand") Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/pasemi: Add Nemo board IRQ init routine  (Darren Stevens, 2 files, -0/+26)
Add an IRQ init routine for the Nemo board which inits and attaches the i8259 found in the SB600, and a cascade routine to dispatch the interrupts. Signed-off-by: Darren Stevens <darren@stevens-zone.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/pasemi: Add Nemo board device init code.  (Darren Stevens, 1 file, -0/+34)
Add routines for Nemo-specific devices to init at boot time, these being the board-level power-off and the SB600's RTC. Also add a runtime variable to prevent these being activated if we boot on a reference board. Signed-off-by: Darren Stevens <darren@stevens-zone.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/pasemi: Add Nemo board IRQ init routine  (Darren Stevens, 1 file, -0/+37)
Add an IRQ init routine for the Nemo board which inits and attaches the i8259 found in the SB600, and a cascade routine to dispatch the interrupts. Signed-off-by: Darren Stevens <darren@stevens-zone.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/pasemi: Add PCI initialisation for Nemo board.  (Darren Stevens, 1 file, -0/+55)
The A-Eon Amigaone X1000's Nemo motherboard has an AMD SB600 connected to one of the PCI-e root ports on its PaSemi Pwrficient 1628M SoC. Normally the SB600 southbridge would be connected to a hidden PCI-e port on the system's northbridge, and as a result it doesn't fully comply with the PCI-e spec. Add code to relax the PCI-e detection in both the root port and the Linux kernel, allowing on-board devices to be detected. Signed-off-by: Darren Stevens <darren@stevens-zone.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc: use mm zones more sensibly  (Christoph Hellwig, 2 files, -19/+0)
Powerpc has somewhat odd usage where ZONE_DMA is used for all memory on common 64-bit configs, and ZONE_DMA32 is used for 31-bit schemes.

Move to a scheme closer to what other architectures use (and I dare to say the intent of the system):

 - ZONE_DMA: optionally for memory < 31-bit (64-bit embedded only)
 - ZONE_NORMAL: everything addressable by the kernel
 - ZONE_HIGHMEM: memory > 32-bit for 32-bit kernels

Also provide information on how ZONE_DMA is used by defining ARCH_ZONE_DMA_BITS. Contains various fixes from Benjamin Herrenschmidt. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc/dma: split the two __dma_alloc_coherent implementations  (Christoph Hellwig, 1 file, -1/+1)
The implementation for the CONFIG_NOT_COHERENT_CACHE case doesn't share any code with the one for systems with coherent caches. Split it off and merge it with the helpers in dma-noncoherent.c that have no other callers. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-20  powerpc: allow NOT_COHERENT_CACHE for amigaone  (Christoph Hellwig, 1 file, -1/+2)
AMIGAONE selects NOT_COHERENT_CACHE, so we better allow it. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-19  powerpc/smp: Use code patching to restore reset vector  (Christophe Leroy, 2 files, -4/+2)
Instead of hardcoding reset vector restore, use patch_instruction() Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
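patch_instruction() wraps the icache maintenance (and, with STRICT_KERNEL_RWX, the temporary remapping) needed to modify kernel text. A sketch of the idea, assuming the pre-5.8 'unsigned int' instruction representation; the reset_vector and saved_insn names are illustrative:

    /* Before: open-coded store into the reset vector plus manual flush */
    *(unsigned int *)reset_vector = saved_insn;
    flush_icache_range((unsigned long)reset_vector,
                       (unsigned long)reset_vector + 4);

    /* After: let the code patching helper do it */
    patch_instruction((unsigned int *)reset_vector, saved_insn);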
2018-12-17  Merge branch 'fixes' into next  (Michael Ellerman, 2 files, -11/+31)
Merge our fixes branch again, this has a couple of build fixes and also a change to do_syscall_trace_enter() that will conflict with a patch we want to apply in next.
2018-12-09  powerpc/papr_scm: Use ibm,unit-guid as the iset cookie  (Oliver O'Halloran, 1 file, -0/+12)
The interleave set cookie is used to determine if a label stored in the metadata space should be applied to the current region. This is important in the case of NVDIMMs since the firmware may change the interleaving configuration of a DIMM which would invalidate the existing labels. In our case the hypervisor hides those details from us so we don't really care, but libnvdimm still requires the interleave set cookie to be non-zero. For our purposes we just need the set cookie to be unique and fixed for a given PAPR SCM region and using the unit-guid (really a UUID) is fine for this purpose. Fixes: b5beae5e224f ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> [mpe: Use kernel types (u64)] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-09  powerpc/papr_scm: Fix DIMM device registration race  (Oliver O'Halloran, 1 file, -0/+3)
When a new nvdimm device is registered with libnvdimm via nvdimm_create() it is added as a device on the nvdimm bus. The probe function for the DIMM driver is potentially quite slow so actually registering and probing the device is done in an async domain rather than immediately after device creation. This can result in a race where the region device (created 2nd) is probed first and fails to activate at boot. To fix this we use the same approach as the ACPI/NFIT driver which is to check that all the DIMM devices registered successfully. LibNVDIMM provides the nvdimm_bus_count_dimms() function which synchronises with the async domain and verifies that the dimm was successfully registered with the bus. If either of these does not occur then we bail. Fixes: b5beae5e224f ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>