path: root/arch/x86/boot/compressed/pgtable_64.c
2022-04-17  x86/boot: Add an efi.h header for the decompressor (Borislav Petkov, 1 file, -2/+1)

Copy the needed symbols only and remove the kernel proper includes. No functional changes.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/YlCKWhMJEMUgJmjF@zn.tnic

2021-09-25  lib/string: Move helper functions out of string.c (Kees Cook, 1 file, -0/+2)

The core functions of string.c are those that may be implemented by per-architecture functions, or overloaded by FORTIFY_SOURCE. As a result, it needs to be built with __NO_FORTIFY. Without this, macros will collide with function declarations. This was accidentally working due to -ffreestanding (on some architectures). Make this deterministic by explicitly setting __NO_FORTIFY and move all the helper functions into string_helpers.c so that they gain the fortification coverage they had been missing.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Andy Lavr <andy.lavr@gmail.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
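As a minimal sketch of the pattern, assuming the usual opt-out mechanism: code that provides its own string primitives defines __NO_FORTIFY before pulling in the string headers, so the fortify macros never replace its local definitions.

    /* Opt out of FORTIFY_SOURCE explicitly instead of relying on
     * -ffreestanding: the fortify macros would otherwise collide
     * with the local memcpy()/memset() definitions. */
    #define __NO_FORTIFY
    #include <linux/string.h>
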
2020-10-25  treewide: Convert macro and uses of __section(foo) to __section("foo") (Joe Perches, 1 file, -4/+4)

Use a more generic form for __section that requires quotes, to avoid complications with clang and gcc differences. Remove the quote operator # from the compiler_attributes.h __section macro. Convert all unquoted __section(foo) uses to quoted __section("foo"). Also convert __attribute__((section("foo"))) uses to __section("foo"), even where the __attribute__ has multiple list entry forms.

Conversion done using the script at:
https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl

Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@gooogle.com>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
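With an illustrative declaration, the conversion looks like this:

    /* Before: token form; the __section macro stringifies its
     * argument with the # operator, which gcc and clang handle
     * differently for some section names. */
    static int foo __section(.data.example);

    /* After: quoted form; the string is passed through verbatim. */
    static int foo __section(".data.example");
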
2020-10-19  x86/boot/64: Initialize 5-level paging variables earlier (Arvind Sankar, 1 file, -0/+16)

Commit ca0e22d4f011 ("x86/boot/compressed/64: Always switch to own page table") started using a new set of pagetables even without KASLR. After that commit, initialize_identity_maps() is called before the 5-level paging variables are setup in choose_random_location(), which will not work if 5-level paging is actually enabled.

Fix this by moving the initialization of __pgtable_l5_enabled, pgdir_shift and ptrs_per_p4d into cleanup_trampoline(), which is called immediately after the finalization of whether the kernel is executing with 4- or 5-level paging. This will be earlier than anything that might require those variables, and keeps the 4- vs 5-level paging code all in one place.

Fixes: ca0e22d4f011 ("x86/boot/compressed/64: Always switch to own page table")
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lkml.kernel.org/r/20201010191110.4060905-1-nivedita@alum.mit.edu
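A sketch of where the initialization ends up, assuming the final paging mode is read back from CR4.LA57 (values per the 5-level layout: pgdir_shift 48, ptrs_per_p4d 512):

    void cleanup_trampoline(void *pgtable)
    {
        /* The 4- vs 5-level decision is final here, so this is the
         * earliest safe place to derive the paging variables. */
    #ifdef CONFIG_X86_5LEVEL
        if (native_read_cr4() & X86_CR4_LA57) {
            __pgtable_l5_enabled = 1;
            pgdir_shift = 48;
            ptrs_per_p4d = 512;
        }
    #endif
        /* ... move the top-level page table out of trampoline memory ... */
    }
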
2020-10-01  x86/asm: Replace __force_order with a memory clobber (Arvind Sankar, 1 file, -9/+0)

The CRn accessor functions use __force_order as a dummy operand to prevent the compiler from reordering CRn reads/writes with respect to each other. The fact that the asm is volatile should be enough to prevent this: volatile asm statements should be executed in program order. However GCC 4.9.x and 5.x have a bug that might result in reordering. This was fixed in 8.1, 7.3 and 6.5. Versions prior to these, including 5.x and 4.9.x, may reorder volatile asm statements with respect to each other.

There are some issues with __force_order as implemented:
- It is used only as an input operand for the write functions, and hence doesn't do anything additional to prevent reordering writes.
- It allows memory accesses to be cached/reordered across write functions, but CRn writes affect the semantics of memory accesses, so this could be dangerous.
- __force_order is not actually defined in the kernel proper, but the LLVM toolchain can in some cases require a definition: LLVM (as well as GCC 4.9) requires it for PIE code, which is why the compressed kernel has a definition, but also the clang integrated assembler may consider the address of __force_order to be significant, resulting in a reference that requires a definition.

Fix this by:
- Using a memory clobber for the write functions to additionally prevent caching/reordering memory accesses across CRn writes.
- Using a dummy input operand with an arbitrary constant address for the read functions, instead of a global variable. This will prevent reads from being reordered across writes, while allowing memory loads to be cached/reordered across CRn reads, which should be safe.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602
Link: https://lore.kernel.org/lkml/20200527135329.1172644-1-arnd@arndb.de/
Link: https://lkml.kernel.org/r/20200902232152.3709896-1-nivedita@alum.mit.edu
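A sketch of the resulting accessors (CR0 shown; the arbitrary constant address follows the scheme the commit describes):

    /* Reads: a dummy "m" input at a fixed dummy address keeps CRn
     * reads ordered against CRn writes without pinning unrelated
     * memory loads. */
    #define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)

    static inline unsigned long native_read_cr0(void)
    {
        unsigned long val;
        asm volatile("mov %%cr0,%0" : "=r" (val) : __FORCE_ORDER);
        return val;
    }

    /* Writes: a "memory" clobber prevents caching/reordering of memory
     * accesses across the write, since CRn writes can change the
     * semantics of those accesses. */
    static inline void native_write_cr0(unsigned long val)
    {
        asm volatile("mov %0,%%cr0" : : "r" (val) : "memory");
    }
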
2019-08-27  x86/boot/compressed/64: Fix missing initialization in find_trampoline_placement() (Kirill A. Shutemov, 1 file, -1/+1)

Gustavo noticed that 'new' can be left uninitialized if 'bios_start' happens to be less than or equal to 'entry->addr + entry->size'. Initialize the variable at the beginning of the iteration to the current value of 'bios_start'.

Fixes: 0a46fff2f910 ("x86/boot/compressed/64: Fix boot on machines with broken E820 table")
Reported-by: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190826133326.7cxb4vbmiawffv2r@box
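The fix amounts to one line at the top of the placement loop (a sketch; the loop body is elided):

    for (i = boot_params->e820_entries - 1; i >= 0; i--) {
        /* Start each iteration from the current best candidate so
         * 'new' can never be consumed uninitialized. */
        unsigned long new = bios_start;

        /* ... checks against e820_table[i] may lower 'new' ... */
    }
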
2019-08-19  x86/boot/compressed/64: Fix boot on machines with broken E820 table (Kirill A. Shutemov, 1 file, -3/+10)

The BIOS on the Samsung 500C Chromebook reports a very rudimentary E820 table that consists of 2 entries:

  BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] usable
  BIOS-e820: [mem 0x00000000fffff000-0x00000000ffffffff] reserved

This breaks the logic in find_trampoline_placement(): bios_start lands on the end of the first 4k page and the trampoline start gets placed below 0. Detect the underflow and don't touch bios_start in such cases. This makes the kernel ignore the E820 table on machines that don't have two usable pages below BIOS_START_MAX.

Fixes: 1b3a62643660 ("x86/boot/compressed/64: Validate trampoline placement against E820")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=203463
Link: https://lkml.kernel.org/r/20190813131654.24378-1-kirill.shutemov@linux.intel.com
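A sketch of the guard inside the placement loop (the exact bound used upstream may differ; treat it as illustrative):

        /* An entry that ends inside the first 4k page would push the
         * trampoline start below 0. Detect the would-be underflow and
         * leave bios_start alone for such entries. */
        if (new < TRAMPOLINE_32BIT_SIZE)
            continue;
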
2019-07-18  x86/boot/compressed/64: Remove unused variable (Zhenzhong Duan, 1 file, -1/+0)

Fix gcc warning:

  arch/x86/boot/compressed/pgtable_64.c: In function 'find_trampoline_placement':
  arch/x86/boot/compressed/pgtable_64.c:43:16: warning: unused variable 'trampoline_start' [-Wunused-variable]
    unsigned long trampoline_start;
                  ^

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lkml.kernel.org/r/1563283040-31101-1-git-send-email-zhenzhong.duan@oracle.com
2019-02-28  x86/boot/compressed/64: Do not read legacy ROM on EFI system (Kirill A. Shutemov, 1 file, -3/+16)

EFI systems do not necessarily provide a legacy ROM. If the ROM is missing, the memory is not mapped at all, and trying to dereference values in the legacy ROM area leads to a crash on a Macbook Pro. Only look for values in the legacy ROM area on non-EFI systems.

Fixes: 3548e131ec6a ("x86/boot/compressed/64: Find a place for 32-bit trampoline")
Reported-by: Pitam Mitra <pitamm@gmail.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Bockjoo Kim <bockjoo@phys.ufl.edu>
Cc: bp@alien8.de
Cc: hpa@zytor.com
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190219075224.35058-1-kirill.shutemov@linux.intel.com
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=202351
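A sketch of the guard, assuming the boot code distinguishes EFI boots via the loader signature in boot_params (field and constant names are assumptions):

        /* The EBDA pointer (BDA 0x40e) and base-memory size (BDA
         * 0x413) live in the legacy ROM area, which EFI firmware may
         * leave unmapped. Read them only on non-EFI boots. */
        signature = (char *)&boot_params->efi_info.efi_loader_signature;
        if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&
            strncmp(signature, EFI64_LOADER_SIGNATURE, 4)) {
            ebda_start = *(unsigned short *)0x40e << 4;
            bios_start = *(unsigned short *)0x413 << 10;
        }
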
2018-08-02  x86/boot/compressed/64: Validate trampoline placement against E820 (Kirill A. Shutemov, 1 file, -18/+55)

There were two reports of boot failures caused by the trampoline being placed in a reserved memory region. This can happen on machines that don't report the EBDA correctly. Fix the problem by re-validating the found address against the E820 table. If the address is in a reserved area, find the next usable region below the initial address.

Fixes: 3548e131ec6a ("x86/boot/compressed/64: Find a place for 32-bit trampoline")
Reported-by: Dmitry Malkin <d.malkin@real-time-systems.com>
Reported-by: youling 257 <youling257@gmail.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lkml.kernel.org/r/20180801133225.38121-1-kirill.shutemov@linux.intel.com
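A sketch of the re-validation (structure per the commit message; the real loop is more involved):

    /* Walk the E820 table top-down. If the EBDA-based estimate falls
     * into a reserved region, pull bios_start down into the first
     * usable RAM region below it. */
    for (i = boot_params->e820_entries - 1; i >= 0; i--) {
        struct boot_e820_entry *entry = &boot_params->e820_table[i];

        if (entry->type != E820_TYPE_RAM || entry->addr >= bios_start)
            continue;

        if (bios_start > entry->addr + entry->size)
            bios_start = entry->addr + entry->size;
        break;
    }
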
2018-05-19  x86/mm: Introduce the 'no5lvl' kernel parameter (Kirill A. Shutemov, 1 file, -2/+10)

This kernel parameter allows forcing the kernel to use 4-level paging even if the hardware and kernel support 5-level paging. The option may be useful to work around regressions related to 5-level paging.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180518103528.59260-5-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
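A sketch of the decompressor-side check; cmdline_find_option_bool() is the boot code's helper for boolean parameters, and the rest mirrors what the commit message describes:

    /* Use 5-level paging only if the kernel supports it, the CPU
     * advertises LA57 and the user did not pass 'no5lvl'. */
    if (IS_ENABLED(CONFIG_X86_5LEVEL) &&
        !cmdline_find_option_bool("no5lvl") &&
        native_cpuid_eax(0) >= 7 &&
        (native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31))))
        paging_config.l5_required = 1;
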
2018-05-19  x86/boot/compressed/64: Fix trampoline page table address calculation (Kirill A. Shutemov, 1 file, -1/+1)

Hugh noticed that we calculate the address of the trampoline page table incorrectly in cleanup_trampoline(). TRAMPOLINE_32BIT_PGTABLE_OFFSET has to be divided by sizeof(unsigned long), since trampoline_32bit is an 'unsigned long' pointer. TRAMPOLINE_32BIT_PGTABLE_OFFSET is zero, so the bug doesn't have a visible effect.

Reported-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: e9d0e6330eb8 ("x86/boot/compressed/64: Prepare new top-level page table for trampoline")
Link: http://lkml.kernel.org/r/20180518103528.59260-2-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
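The corrected calculation, in essence:

    /* trampoline_32bit is an 'unsigned long *', so the byte offset
     * must be scaled down before being used as an index. */
    pgd = trampoline_32bit + TRAMPOLINE_32BIT_PGTABLE_OFFSET / sizeof(unsigned long);
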
2018-05-16  x86/boot/compressed/64: Fix moving page table out of trampoline memory (Kirill A. Shutemov, 1 file, -11/+3)

cleanup_trampoline() relocates the top-level page table out of trampoline memory, using 'top_pgtable' as the new top-level page table. But if 'top_pgtable' were referenced from C in the usual way, the address of the table would be calculated relative to RIP. After the kernel gets relocated, that address would point into the middle of the decompression buffer and the page table could get overwritten, leading to a crash.

The addresses of the other page tables are calculated relative to the relocation address, which makes them safe; do the same for 'top_pgtable'. Calculate the address of 'top_pgtable' in assembly and pass it down to cleanup_trampoline(). Also move the page table to the .pgtable section, where the rest of the page tables live. The section is @nobits, so this saves 4k in the kernel image.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: e9d0e6330eb8 ("x86/boot/compressed/64: Prepare new top-level page table for trampoline")
Link: http://lkml.kernel.org/r/20180516080131.27913-3-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
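A sketch of the handoff: the assembly computes the address relative to the relocated load address (here %rbx, per the usual head_64.S convention) and the C side just uses the pointer it is given. Details are approximate.

    /*
     * head_64.S:
     *     leaq  top_pgtable(%rbx), %rdi
     *     call  cleanup_trampoline
     */
    void cleanup_trampoline(void *pgtable)
    {
        /* 'pgtable' is relocation-safe, unlike &top_pgtable computed
         * RIP-relatively from C. Move the top-level page table out of
         * trampoline memory into it. */
        memcpy(pgtable,
               trampoline_32bit + TRAMPOLINE_32BIT_PGTABLE_OFFSET / sizeof(unsigned long),
               PAGE_SIZE);
        native_write_cr3((unsigned long)pgtable);
    }
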
2018-03-12  x86/boot/compressed/64: Prepare new top-level page table for trampoline (Kirill A. Shutemov, 1 file, -0/+61)

If the trampoline code needs to switch between 4- and 5-level paging modes, it has to use a page table in trampoline memory. Having it in trampoline memory guarantees that it's below 4G, so 32-bit trampoline code can point CR3 at it.

The page table is only used if the desired paging mode doesn't match the mode we are in. Otherwise it is unused and the trampoline code doesn't touch CR3.

For the 4- to 5-level paging transition, set up the current (4-level paging) CR3 as the first and only entry in a new top-level page table. For the 5- to 4-level paging transition, copy the page table pointed to by the first entry in the current top-level page table and use the copy as the new top-level page table. If that page table is in use by the trampoline, it has to be copied to a new page table outside trampoline memory and CR3 updated before trampoline memory is restored.

Tested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20180226180451.86788-6-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
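A sketch of the two transitions (offset handling and flag names are approximate):

    unsigned long *trampoline_pgd = trampoline_32bit +
            TRAMPOLINE_32BIT_PGTABLE_OFFSET / sizeof(unsigned long);

    if (l5_required) {
        /* 4 -> 5: the current 4-level CR3 becomes the one and only
         * entry of the new top-level table. */
        trampoline_pgd[0] = __native_read_cr3() | _PAGE_TABLE;
    } else {
        /* 5 -> 4: reuse the 4-level hierarchy that the first entry
         * of the current top level already points to. */
        unsigned long src = *(unsigned long *)__native_read_cr3() & PAGE_MASK;
        memcpy(trampoline_pgd, (void *)src, PAGE_SIZE);
    }
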
2018-03-12  x86/boot/compressed/64: Set up trampoline memory (Kirill A. Shutemov, 1 file, -0/+7)

Clear the trampoline memory and copy the trampoline code into place. It is not yet used, though.

Tested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20180226180451.86788-5-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
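In essence (a sketch; symbol names are approximate):

    /* Clear the whole trampoline area, then drop the 32-bit
     * trampoline code into its slot. Nothing jumps to it yet. */
    memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
    memcpy(trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET / sizeof(unsigned long),
           &trampoline_32bit_src, TRAMPOLINE_32BIT_CODE_SIZE);
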
2018-03-12  x86/boot/compressed/64: Save and restore trampoline memory (Kirill A. Shutemov, 1 file, -0/+13)

The memory area found for the trampoline shouldn't contain anything useful, but preserve the data anyway, just to be on the safe side. paging_prepare() saves the data into a buffer; cleanup_trampoline() restores it back once we are done with the trampoline.

Tested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20180226180451.86788-4-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
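A sketch of the pairing (the save buffer name is illustrative):

    /* paging_prepare(): stash whatever currently occupies the
     * trampoline area. */
    memcpy(trampoline_save, trampoline_32bit, TRAMPOLINE_32BIT_SIZE);

    /* cleanup_trampoline(): put the original contents back. */
    memcpy(trampoline_32bit, trampoline_save, TRAMPOLINE_32BIT_SIZE);
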
2018-03-12  x86/boot/compressed/64: Find a place for 32-bit trampoline (Kirill A. Shutemov, 1 file, -0/+34)

If a bootloader enables 64-bit mode with 4-level paging, we might need to switch over to 5-level paging. The switch requires disabling paging, which works fine if the kernel itself is loaded below 4G. But if the bootloader puts the kernel above 4G (it's unclear whether anybody does this), we would lose control as soon as paging is disabled, because the code becomes unreachable to the CPU.

To handle the situation, we need a trampoline in lower memory that takes care of switching on 5-level paging. This patch finds a spot in low memory for the trampoline. The heuristic is based on the code in reserve_bios_regions(): find the end of low memory based on the BIOS and EBDA start addresses, and put the trampoline just before the end of low memory. This mimics the approach taken to allocate memory for the real-mode trampoline.

Tested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20180226180451.86788-3-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
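A sketch of the heuristic (BDA addresses as used by reserve_bios_regions(); the bounds macros are assumptions):

    /* End of usable low memory: the lower of the EBDA start (BDA
     * 0x40e holds a segment) and the BIOS base-memory size (BDA
     * 0x413, in KB), clamped to sane bounds. */
    ebda_start = *(unsigned short *)0x40e << 4;
    bios_start = *(unsigned short *)0x413 << 10;

    if (bios_start < BIOS_START_MIN || bios_start > BIOS_START_MAX)
        bios_start = BIOS_START_MAX;
    if (ebda_start > BIOS_START_MIN && ebda_start < bios_start)
        bios_start = ebda_start;

    /* Put the trampoline just below the end of low memory. */
    bios_start = round_down(bios_start, PAGE_SIZE);
    trampoline_32bit = (unsigned long *)(bios_start - TRAMPOLINE_32BIT_SIZE);
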
2018-03-12  x86/boot/compressed/64: Describe the logic behind the LA57 check (Kirill A. Shutemov, 1 file, -3/+15)

Explain the LA57 check in more detail.

Tested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20180226180451.86788-2-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-02-11  x86/boot/compressed/64: Introduce paging_prepare() (Kirill A. Shutemov, 1 file, -13/+12)

Rename l5_paging_required() to paging_prepare() and change the interface of the function. This prepares for the next patch, which will make the function also allocate memory for the 32-bit trampoline.

The function now returns a 128-bit structure: RAX returns the trampoline memory address (zero for now) and RDX indicates whether 5-level paging needs to be enabled.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
[ Typo fixes and general clarification. ]
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@suse.de>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20180209142228.21231-3-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
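The new interface, as described (a two-member structure is returned in the RAX:RDX register pair under the x86-64 calling convention; the struct name is illustrative):

    struct paging_config {
        unsigned long trampoline_start;  /* returned in RAX; 0 for now */
        unsigned long l5_required;       /* returned in RDX */
    };

    struct paging_config paging_prepare(void);
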
2017-12-07  x86/boot/compressed/64: Detect and handle 5-level paging at boot-time (Kirill A. Shutemov, 1 file, -0/+28)

This is a prerequisite for fixing the current problem of instantaneous reboots when a 5-level paging kernel is booted on 4-level paging hardware. At the same time, this change prepares the decompression code for boot-time switching between 4- and 5-level paging.

[ tglx: Folded the GCC < 5 fix. ]

Fixes: 77ef56e4f0fb ("x86: Enable 5-level paging support via CONFIG_X86_5LEVEL=y")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: linux-mm@kvack.org
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/20171204124059.63515-2-kirill.shutemov@linux.intel.com
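A sketch of the detection: LA57 is the architectural CPUID bit for 5-level paging (CPUID.(EAX=7,ECX=0):ECX[16]); the helper names follow the boot code's native_cpuid_*() convention and are otherwise assumptions:

    static int l5_supported(void)
    {
        /* CPUID leaf 7 must exist before its ECX bits mean anything. */
        if (native_cpuid_eax(0) < 7)
            return 0;

        /* LA57: CPUID.(EAX=7,ECX=0):ECX, bit 16. */
        return !!(native_cpuid_ecx(7) & (1 << 16));
    }
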