path: root/sys/arch/powerpc
Commit log (subject, with author, date, files changed, and lines -removed/+added):
* Add retguard to macppc kernel locore.S, ofwreal.S, setjmp.S (gkoehler, 2020-11-28; 2 files; -17/+31)
  This changes RETGUARD_SETUP(ffs) to RETGUARD_SETUP(ffs, %r11, %r12) and
  RETGUARD_CHECK(ffs) to RETGUARD_CHECK(ffs, %r11, %r12) to show that r11 and
  r12 are in use between setup and check, and to pick registers other than
  r11 and r12 in some kernel functions.
  ok mortimer@ deraadt@
* uvm_grow() no longer needs the KERNEL_LOCK, bring it back to just around
  uvm_fault(), and slightly refactor code to be more like on other
  architectures (deraadt, 2020-10-27; 1 file; -20/+27)
* Retguard asm macros for powerpc libc, ld.so (gkoehler, 2020-10-26; 1 file; -1/+42)
  Add retguard to some, but not all, asm functions in libc. Edit SYS.h in
  libc to remove the PREFIX macros and add SYSENTRY (more like aarch64 and
  powerpc64), so we can insert RETGUARD_SETUP after SYSENTRY. Some .S files
  in this commit don't get retguard, but do stop using the old prefix macros.
  Tested by deraadt@, who put this diff in a macppc snap.
* mi_ast() should not use the old cpu, but the cpu (after potential sleep in
  refreshcreds()) (deraadt, 2020-09-24; 1 file; -2/+2)
  ok kettenis
* Only perform uvm_map_inentry() checks for PROC_SP for userland pagefaults. (deraadt, 2020-09-24; 1 file; -5/+6)
  This should be sufficient for identifying pivoted ROP. Doing so for other
  traps is at best opportunistic for finding a straight-running ROP chain,
  but the added (and rare) sleeping point has proven to be dangerous.
  Discussed at length with kettenis and mortimer.
  ok mortimer kettenis mpi
* Include <sys/systm.h> directly instead of relying on hidden UVM includes. (mpi, 2020-09-11; 3 files; -3/+6)
  The header is being pulled via db_machdep.h -> uvm_extern.h -> uvm_map.h
* Push KERNEL_LOCK/UNLOCK() dance inside trapsignal(). (mpi, 2020-08-19; 1 file; -15/+1)
  ok kettenis@, visa@
* do not need this one either (deraadt, 2020-07-09; 1 file; -23/+0)
* Add support for timecounting in userland. (pirofti, 2020-07-06; 1 file; -0/+23)
  This diff exposes parts of clock_gettime(2) and gettimeofday(2) to
  userland via libc, liberating processes from the need for a context switch
  every time they want to count the passage of time.
  If a timecounter clock can be exposed to userland then it needs to set its
  tc_user member to a non-zero value. Tested with one or multiple counters
  per architecture.
  The timing data is shared through a pointer found in the new ELF auxiliary
  vector AUX_openbsd_timekeep containing timehands information that is
  frequently updated by the kernel.
  Timing differences between the last kernel update and the current time are
  adjusted in userland by the tc_get_timecount() function inside the MD
  usertc.c file.
  This permits a much more responsive environment, quite visible in
  browsers, office programs and gaming (apparently one is able to fly in
  Minecraft now).
  Tested by robert@, sthen@, naddy@, kmos@, phessler@, and many others!
  OK from at least kettenis@, cheloha@, naddy@, sthen@
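The userland side of this scheme can be sketched roughly as follows. The struct layout, field names, and the linear scale here are illustrative stand-ins, not the real AUX_openbsd_timekeep ABI:

```c
#include <stdint.h>

/* Hypothetical, simplified snapshot of the kernel's timehands as shared
 * through the timekeep page (names are illustrative, not the real ABI). */
struct th_snap {
	uint64_t th_offset_count;	/* counter value at last kernel update */
	uint64_t th_offset_nsec;	/* uptime in ns at last kernel update */
	uint64_t th_nsec_per_tick;	/* simplified linear scale */
	uint64_t th_counter_mask;	/* usable width of the counter */
};

/* Stand-in for the MD tc_get_timecount() that usertc.c would provide. */
static uint64_t fake_counter;
static uint64_t tc_get_timecount(void) { return fake_counter; }

/* Userland extrapolates from the last kernel update -- no syscall needed. */
static uint64_t
user_uptime_nsec(const struct th_snap *th)
{
	uint64_t delta = (tc_get_timecount() - th->th_offset_count) &
	    th->th_counter_mask;
	return th->th_offset_nsec + delta * th->th_nsec_per_tick;
}
```

The kernel periodically refreshes the shared snapshot; between refreshes, userland only pays for one counter read and a multiply.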
* Remove obsolete <machine/stdarg.h> header. Nowadays the vararg
  functionality is provided by <sys/stdarg.h> using compiler builtins. (visa, 2020-06-30; 1 file; -51/+0)
  Tested in a ports bulk build on amd64 by naddy@
  OK naddy@ mpi@
* Fix and harmonize some of the code dealing with address offsets encoded in
  instructions. (kettenis, 2020-06-06; 1 file; -6/+6)
  ok drahn@, gkoehler@
* Implement cpu_rnd_messybits() as a read of the cycle counter register. (naddy, 2020-06-05; 1 file; -2/+10)
  ok dlg@, powerpc/sparc64 ok kettenis@, sparc64/alpha tested by deraadt@
* introduce "cpu_rnd_messybits" for use instead of nanotime in dev/rnd.c. (dlg, 2020-05-31; 1 file; -1/+3)
  rnd.c uses nanotime to get access to some bits that change quickly between
  events that it can mix into the entropy pool. It doesn't use nanotime to
  get a monotonically increasing set of ordered and accurate timestamps, it
  just wants something with bits that change.
  There have been discussions for years about letting rnd use a clock that's
  super fast to read, but not necessarily accurate, but it wasn't until
  recently that I figured out it wasn't interested in time at all, so things
  like keeping a fast clock coherent between cpu cores or correct according
  to ntp are unnecessary. This means we can just let rnd read the cycle
  counters on cpus and things will be fine. Cpus with cycle counters that
  vary in their speed and aren't kept consistent between cores may even be
  desirable in this context.
  So this is the first step in converting rnd.c to reading cycle counters.
  It copies the nanotime backend to each arch, and they can replace it with
  something MD as a second step later on.
  djm@ suggested rnd_messybytes, but we landed on cpu_rnd_messybits.
  Thanks to visa for his eyes.
  ok deraadt@ visa@
  deraadt@ says he will help handle any MD fallout that occurs.
* Retire <machine/varargs.h>. (visa, 2020-05-27; 1 file; -57/+0)
  Nothing uses the header anymore.
  OK deraadt@ mpi@
* Decode the %{ds}(%{A}) operand of ld, std instructions. (gkoehler, 2020-05-22; 1 file; -7/+20)
  I don't expect to see these 64-bit instructions in 32-bit kernels, but I'm
  going to copy this code to powerpc64.
  ok drahn@
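For reference, ld and std are DS-form: the 16-bit displacement field reuses its low two bits as an extended opcode, so the displacement is always a multiple of 4. A disassembler recovers the %ds(%rA) operand along these lines (the helper name is made up for illustration):

```c
#include <stdint.h>

/*
 * DS-form layout: opcode(6) RT(5) RA(5) DS(14) XO(2).
 * ld is primary opcode 58, std is 62.  The displacement is the low
 * 16 bits with the two XO bits masked off, sign-extended.
 */
static void
decode_ds(uint32_t insn, int *rt, int *ra, int *ds)
{
	*rt = (insn >> 21) & 0x1f;
	*ra = (insn >> 16) & 0x1f;
	*ds = (int16_t)(insn & 0xfffc);	/* keep the sign, drop XO bits */
}
```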
* Use '/t' on all architectures to get a trace via TID. (mpi, 2020-05-14; 1 file; -2/+2)
  ok sthen@, patrick@
* Sync existing stacktrace_save() implementations (visa, 2020-04-18; 1 file; -7/+1)
  Upgrade stacktrace_save() to stacktrace_save_at() on architectures where
  the latter is missing. Define stacktrace_save() as an inline function in
  header <sys/stacktrace.h> to reduce duplication of code.
  OK mpi@
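The resulting MI arrangement looks roughly like this sketch; the struct fields and the stub MD walker below are simplified stand-ins for illustration:

```c
#include <stddef.h>

#define STACKTRACE_MAX 32

struct stacktrace {
	size_t		st_count;
	unsigned long	st_pc[STACKTRACE_MAX];
};

/* Each arch provides the real frame walker; this stub records nothing. */
static void
stacktrace_save_at(struct stacktrace *st, unsigned int skip)
{
	(void)skip;
	st->st_count = 0;
}

/* The MI inline in <sys/stacktrace.h>: one definition shared by all
 * archs, saving from the caller with no frames skipped. */
static inline void
stacktrace_save(struct stacktrace *st)
{
	stacktrace_save_at(st, 0);
}
```

Only stacktrace_save_at() remains MD; the zero-skip wrapper no longer needs per-arch copies.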
* Switch powerpc to MI mplock implementation. (mpi, 2020-04-15; 3 files; -29/+31)
  Reduce differences with other architectures and make it possible to use
  WITNESS on it. Rename & keep the current recursive lock implementation as
  it is used by the pmap.
  Tested by Peter J. Philipp, otto@ and cwen@.
  ok kettenis@
* Implement stacktrace_save_at() required for upcoming WITNESS. (mpi, 2020-04-10; 1 file; -1/+38)
  ok gkoehler@
* Fix inline assembly in ppc_mftb(); using %L0 instead of %0+1 makes this
  work for both gcc and clang. From NetBSD. (kettenis, 2020-03-17; 1 file; -2/+2)
  Thanks to some serious detective work by gkoehler@.
  ok deraadt@, gkoehler@
* The 'lock spun out' db_printf needs a newline. All other MP_LOCKDEBUG
  messages do have the newline already. (claudio, 2020-03-05; 1 file; -2/+2)
  OK anton@ kettenis@
* db_addr_t -> vaddr_t (mpi, 2019-11-07; 2 files; -12/+12)
* Substitute boolean_t/TRUE/FALSE by int/1/0. (mpi, 2019-11-07; 2 files; -6/+6)
* delete two decades of debugging code and further simplify the main trap()
  switch statement (deraadt, 2019-09-06; 1 file; -263/+189)
  ok kettenis
* oops the label is actually out: (deraadt, 2019-09-06; 1 file; -2/+2)
* oops incorrect goto label (deraadt, 2019-09-06; 1 file; -2/+2)
* If uvm_map_inentry returns false then a signal has been delivered, and
  userret() must be called on trap() exit to deliver it, rather than
  repeating the same cause infinitely. (deraadt, 2019-09-06; 1 file; -2/+3)
  discovered by George Koehler
  ok kettenis bluhm visa
* Prepare the BAT for kernels with more than 8MB of code; why? because
  clang. (deraadt, 2019-09-05; 1 file; -2/+7)
  ok kettenis
* some cleanup for clang; ok kettenis (deraadt, 2019-09-03; 1 file; -8/+2)
* Increment `db_active' before entering db_trap() like other archs do. (mpi, 2019-07-20; 1 file; -1/+3)
  ok visa@
* Use "i" constraint instead of "n" constraint in inline assembly. Makes
  clang happy. (kettenis, 2019-07-11; 1 file; -3/+3)
  ok visa@, mpi@
* I wrote the pc-page-writeable and sp-not-MAP_STACK code to be shared, and
  then ran into the messaging being poor. Then I fixed the messages. (deraadt, 2019-07-09; 1 file; -2/+3)
  But there are two sub-cases of sp-not-MAP_STACK -- one at syscall time,
  and another at regular userland trap (on some architectures), and I
  bungled that messaging. Correct that now, while I look for yet another
  better way...
  discovered by millert, who ran a pre-MAP_STACK binary.
* Drop % from register name used for register variable since it makes clang
  unhappy. (kettenis, 2019-07-02; 1 file; -2/+2)
  ok deraadt@, visa@
* Refactor the MAP_STACK feature, and introduce another similar variation:
  Lookup the address that a syscall instruction is executed from, and kill
  the process if that page is writeable. (deraadt, 2019-06-01; 1 file; -19/+4)
  This brings an aspect of W^X behaviour to W|X mappings (in JITs not yet
  adapted to W^X). The goal is to remove simple attack methods and force
  use of ret2libc or other more complicated means.
  ok kettenis stefan visa
* Use the debugger mutex for `ddb_mp_mutex'. This should prevent a race
  that could leave `ddb_mp_mutex' locked if one CPU incremented `db_active'
  while another CPU was in the critical section. (visa, 2019-03-23; 2 files; -19/+17)
  When the race hit, the debugger was unable to resume execution or switch
  between CPUs.
  Race analyzed by patrick@
  OK mpi@ patrick@
* Add intr_{disable,restore}() for powerpc. (visa, 2019-03-23; 1 file; -1/+13)
  OK mpi@ patrick@
* In pmap_page_protect(), zap the PTE before unlinking. At that point the
  PTED_VA_MANAGED_M flag is still set so proper MOD/REF accounting will
  happen. (kettenis, 2019-01-02; 1 file; -1/+8)
  Fixes memory corruption that would invariably happen when a machine
  started swapping. Giant cluestick from George Koehler.
  ok visa@, mpi@
* Include srp.h where struct cpu_info uses srp to avoid erroring out when
  including cpu.h, machine/intr.h etc without first including param.h when
  MULTIPROCESSOR is defined. (jsg, 2018-12-05; 1 file; -1/+2)
  ok visa@
* More "explicitely" -> "explicitly" in various comments. (krw, 2018-10-22; 1 file; -2/+2)
  ok guenther@ tb@ deraadt@
* Unify and bump some of the NMBCLUSTERS defines. Some archs had it set to
  4MB, which is far too low, especially when the platform is able to run
  MP. (claudio, 2018-09-14; 1 file; -2/+2)
  New limits are: amd64 = 256M; arm64, mips64, sparc64 = 64M; alpha, arm,
  hppa, i386, powerpc = 32M; m88k, sh = 8M.
  Still rather conservative numbers but much better than before. At least
  some hangs of arm64 build boxes were caused by this.
  OK kettenis@, visa@
* Remove unused spllock(). (visa, 2018-08-20; 1 file; -2/+1)
  OK deraadt@ mpi@
* Implement MAP_STACK option for mmap(). Synchronous faults (pagefault and
  syscall) confirm the stack register points at MAP_STACK memory, otherwise
  SIGSEGV is delivered. (deraadt, 2018-04-12; 1 file; -1/+19)
  sigaltstack() and pthread_attr_setstack() are modified to create a
  MAP_STACK sub-region which satisfies alignment requirements. Observe that
  MAP_STACK can only be set/cleared by mmap(), which zeroes the contents of
  the region -- there is no mprotect() equivalent operation, so there is no
  MAP_STACK-adding gadget.
  This opportunistic software-emulation of a stack protection bit makes
  stack-pivot operations during ROP chains fragile (kind of like removing a
  tool from the toolbox).
  original discussion with tedu, uvm work by stefan, testing by mortimer
  ok kettenis
* Do not panic from ddb(4) when a lock requirement isn't fulfilled. (mpi, 2018-03-20; 1 file; -4/+1)
  Extend the logic already present for panic() to any DDB-related operation
  such that if ddb(4) is entered because of a fault or other trap it is
  still possible to call 'boot reboot'.
  While here stop printing splassert() messages as well, to not fill the
  buffer.
  ok visa@, deraadt@
* #define _MAX_PAGE_SHIFT in MD _types.h as the maximum pagesize an arch
  needs (looking at you sgi, but others required this before). (deraadt, 2018-03-05; 1 file; -1/+2)
  This is for the circumstances where we need the pagesize known at compile
  time, not getpagesize() at runtime. Use it for malloc storage sizes, for
  shm, and to set pthread default stack sizes. The stack sizes were a mess,
  and pushing them towards page-aligned is a healthy move (which will also
  be needed by the coming stack register checker).
  ok guenther kettenis, discussion with stefan
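A sketch of the compile-time use, with an assumed value of 12 (4KB pages); the real value is per-arch and lives in <machine/_types.h>:

```c
/* Illustrative value; each arch defines its own in <machine/_types.h>. */
#define _MAX_PAGE_SHIFT	12
#define _MAX_PAGE_SIZE	(1UL << _MAX_PAGE_SHIFT)

/* Round a default stack size up to the maximum page size at compile
 * time, with no getpagesize() call at runtime. */
static unsigned long
round_to_max_page(unsigned long sz)
{
	return (sz + _MAX_PAGE_SIZE - 1) & ~(_MAX_PAGE_SIZE - 1);
}
```

Using the maximum possible page size keeps the rounded sizes valid on every configuration of the arch.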
* Remove mutex implementations that now live in MI code. (mpi, 2018-01-25; 1 file; -151/+0)
* Move common mutex implementations to a MI place. (mpi, 2018-01-25; 2 files; -86/+3)
  Archs not yet converted can do the jump by defining __USE_MI_MUTEX.
  ok visa@
* Include <sys/mutex.h> rather than <machine/mutex.h> (mpi, 2018-01-22; 1 file; -2/+2)
  Required by upcoming MI mutex change.
* Define and use IPL_MPFLOOR in our common mutex implementation. (mpi, 2018-01-13; 2 files; -3/+4)
  ok kettenis@, visa@
* Unify <machine/mutex.h> a bit further. (mpi, 2018-01-12; 1 file; -7/+8)
  `mtx_owner' becomes the first field of 'struct mutex' on i386/amd64/arm64.
  ok visa@
* Add size for free. (visa, 2018-01-11; 1 file; -2/+5)
  OK mpi@