<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-dev/arch/arm64/include/asm/assembler.h, branch linus/master</title>
<subtitle>Linux kernel development work - see feature branches</subtitle>
<id>https://git.zx2c4.com/linux-dev/atom/arch/arm64/include/asm/assembler.h?h=linus%2Fmaster</id>
<link rel='self' href='https://git.zx2c4.com/linux-dev/atom/arch/arm64/include/asm/assembler.h?h=linus%2Fmaster'/>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/'/>
<updated>2022-03-14T19:08:31Z</updated>
<entry>
<title>Merge branch 'for-next/spectre-bhb' into for-next/core</title>
<updated>2022-03-14T19:08:31Z</updated>
<author>
<name>Will Deacon</name>
<email>will@kernel.org</email>
</author>
<published>2022-03-14T19:05:13Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=641d804157294d9b19bdfe6a2cdbd5d25939a048'/>
<id>urn:sha1:641d804157294d9b19bdfe6a2cdbd5d25939a048</id>
<content type='text'>
Merge in the latest Spectre mess to fix up conflicts with what was
already queued for 5.18 when the embargo finally lifted.

* for-next/spectre-bhb: (21 commits)
  arm64: Do not include __READ_ONCE() block in assembly files
  arm64: proton-pack: Include unprivileged eBPF status in Spectre v2 mitigation reporting
  arm64: Use the clearbhb instruction in mitigations
  KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated
  arm64: Mitigate spectre style branch history side channels
  arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2
  arm64: Add percpu vectors for EL1
  arm64: entry: Add macro for reading symbol addresses from the trampoline
  arm64: entry: Add vectors that have the bhb mitigation sequences
  arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations
  arm64: entry: Allow the trampoline text to occupy multiple pages
  arm64: entry: Make the kpti trampoline's kpti sequence optional
  arm64: entry: Move trampoline macros out of ifdef'd section
  arm64: entry: Don't assume tramp_vectors is the start of the vectors
  arm64: entry: Allow tramp_alias to access symbols after the 4K boundary
  arm64: entry: Move the trampoline data page before the text page
  arm64: entry: Free up another register on kpti's tramp_exit path
  arm64: entry: Make the trampoline cleanup optional
  KVM: arm64: Allow indirect vectors to be used without SPECTRE_V3A
  arm64: spectre: Rename spectre_v4_patch_fw_mitigation_conduit
  ...
</content>
</entry>
<entry>
<title>Revert "arm64: Mitigate MTE issues with str{n}cmp()"</title>
<updated>2022-03-07T21:57:02Z</updated>
<author>
<name>Joey Gouly</name>
<email>joey.gouly@arm.com</email>
</author>
<published>2022-03-01T10:14:35Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=e33c89256e66ba64ce5190c7f2c2741e619c6321'/>
<id>urn:sha1:e33c89256e66ba64ce5190c7f2c2741e619c6321</id>
<content type='text'>
This reverts commit 59a68d4138086c015ab8241c3267eec5550fbd44.

Now that the str{n}cmp functions have been updated to handle MTE
properly, the workaround to use the generic functions is no longer
needed.

Signed-off-by: Joey Gouly &lt;joey.gouly@arm.com&gt;
Cc: Robin Murphy &lt;robin.murphy@arm.com&gt;
Cc: Mark Rutland &lt;mark.rutland@arm.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Will Deacon &lt;will@kernel.org&gt;
Acked-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Acked-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Link: https://lore.kernel.org/r/20220301101435.19327-4-joey.gouly@arm.com
Signed-off-by: Will Deacon &lt;will@kernel.org&gt;
</content>
</entry>
<entry>
<title>arm64: Use the clearbhb instruction in mitigations</title>
<updated>2022-02-24T14:02:44Z</updated>
<author>
<name>James Morse</name>
<email>james.morse@arm.com</email>
</author>
<published>2021-12-10T14:32:56Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=228a26b912287934789023b4132ba76065d9491c'/>
<id>urn:sha1:228a26b912287934789023b4132ba76065d9491c</id>
<content type='text'>
Future CPUs may implement a clearbhb instruction that is sufficient
to mitigate Spectre-BHB. CPUs that implement this instruction, but
not CSV2.3, must be assumed to be affected by Spectre-BHB.

Add support to use this instruction as the BHB mitigation on CPUs
that support it. The instruction is in the hint space, so it will
be treated as a NOP by older CPUs.
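
As a hedged illustration only (the macro name is an assumption, and the
kernel's actual plumbing adds runtime patching around it), a hint-space
encoding lets older assemblers emit the instruction, assuming CLEARBHB
is HINT #22:

	.macro clearbhb
	hint	#22		// CLEARBHB; executes as a NOP where unimplemented
	isb
	.endm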

Reviewed-by: Russell King (Oracle) &lt;rmk+kernel@armlinux.org.uk&gt;
Reviewed-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: James Morse &lt;james.morse@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Mitigate spectre style branch history side channels</title>
<updated>2022-02-24T13:58:52Z</updated>
<author>
<name>James Morse</name>
<email>james.morse@arm.com</email>
</author>
<published>2021-11-10T14:48:00Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=558c303c9734af5a813739cd284879227f7297d2'/>
<id>urn:sha1:558c303c9734af5a813739cd284879227f7297d2</id>
<content type='text'>
Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation.
When taking an exception from user-space, a sequence of branches
or a firmware call overwrites or invalidates the branch history.

The sequence of branches is added to the vectors, and should appear
before the first indirect branch. For systems using KPTI the sequence
is added to the kpti trampoline where it has a free register as the exit
from the trampoline is via a 'ret'. For systems not using KPTI, the same
register tricks are used to free up a register in the vectors.
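
A hedged sketch of such a branch sequence (the macro name and the
iteration count of 32 are illustrative; the kernel patches the real
count per CPU at runtime):

	.macro mitigate_bhb_loop, tmp
	mov	\tmp, #32		// illustrative count, CPU-specific in practice
.Lbhb_loop\@:
	b	. + 4			// a taken direct branch fills one BHB entry
	subs	\tmp, \tmp, #1
	b.ne	.Lbhb_loop\@
	dsb	nsh			// barrier; newer cores could use SB instead
	isb
	.endm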

For the firmware call, arch-workaround-3 clobbers 4 registers, so
there is no choice but to save them to the EL1 stack. This only happens
for entry from EL0, so if we take an exception due to the stack access,
it will not become re-entrant.
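
A hedged sketch of the save/call/restore around the firmware call (the
function ID value and register use are illustrative assumptions):

	stp	x0, x1, [sp, #-16]!	// save the registers the call clobbers
	stp	x2, x3, [sp, #-16]!
	movz	w0, #0x3fff
	movk	w0, #0x8000, lsl #16	// ARM_SMCCC_ARCH_WORKAROUND_3 (assumed 0x80003fff)
	smc	#0
	ldp	x2, x3, [sp], #16
	ldp	x0, x1, [sp], #16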

For KVM, the existing branch-predictor-hardening vectors are used.
When a spectre version of these vectors is in use, the firmware call
is sufficient to mitigate against Spectre-BHB. For the non-spectre
versions, the sequence of branches is added to the indirect vector.

Reviewed-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: James Morse &lt;james.morse@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: entry: Add vectors that have the bhb mitigation sequences</title>
<updated>2022-02-16T13:16:08Z</updated>
<author>
<name>James Morse</name>
<email>james.morse@arm.com</email>
</author>
<published>2021-11-18T13:59:46Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=ba2689234be92024e5635d30fe744f4853ad97db'/>
<id>urn:sha1:ba2689234be92024e5635d30fe744f4853ad97db</id>
<content type='text'>
Some CPUs affected by Spectre-BHB need a sequence of branches, or a
firmware call to be run before any indirect branch. This needs to go
in the vectors. No CPU needs both.

While this can be patched in, it would run on all CPUs as there is a
single set of vectors. If only one part of a big/little combination is
affected, the unaffected CPUs have to run the mitigation too.

Create extra vectors that include the sequence. Subsequent patches will
allow affected CPUs to select this set of vectors. Later patches will
modify the loop count to match what the CPU requires.
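
A hedged sketch of the idea (macro and symbol names here are
illustrative, not taken from the patch): each entry is emitted by a
macro whose flag decides whether the mitigation sequence runs before
the branch to the real handler:

	.macro bhb_ventry, handler, bhb
	.align	7			// each vector entry is 128 bytes
	.if	\bhb
	clear_branch_history		// hypothetical mitigation macro
	.endif
	b	\handler
	.endm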

Reviewed-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: James Morse &lt;james.morse@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Add macro version of the BTI instruction</title>
<updated>2021-12-14T18:12:58Z</updated>
<author>
<name>Mark Brown</name>
<email>broonie@kernel.org</email>
</author>
<published>2021-12-14T15:27:12Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=9be34be87cc8d1afe3c3bc2e645b4dee512d9eda'/>
<id>urn:sha1:9be34be87cc8d1afe3c3bc2e645b4dee512d9eda</id>
<content type='text'>
BTI is only available from v8.5 so we need to encode it using HINT in
generic code and for older toolchains. Add an assembler macro based on
one written by Mark Rutland which lets us use the mnemonic and update
the existing users.
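
A minimal sketch of such a macro, assuming the architectural HINT
encodings BTI c = #34, BTI j = #36 and BTI jc = #38 (close to, but not
necessarily verbatim, what the patch adds):

	.macro bti, targets
	.equ	.L__bti_targets_c, 34
	.equ	.L__bti_targets_j, 36
	.equ	.L__bti_targets_jc, 38
	hint	#.L__bti_targets_\targets
	.endm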

Suggested-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Acked-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Acked-by: Will Deacon &lt;will@kernel.org&gt;
Signed-off-by: Mark Brown &lt;broonie@kernel.org&gt;
Acked-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Link: https://lore.kernel.org/r/20211214152714.2380849-2-broonie@kernel.org
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>Merge branch 'for-next/kexec' into for-next/core</title>
<updated>2021-10-29T11:24:47Z</updated>
<author>
<name>Will Deacon</name>
<email>will@kernel.org</email>
</author>
<published>2021-10-29T11:24:47Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=d8a2c0fba530f318c32e60310bc9df79fa54a14d'/>
<id>urn:sha1:d8a2c0fba530f318c32e60310bc9df79fa54a14d</id>
<content type='text'>
* for-next/kexec:
  arm64: trans_pgd: remove trans_pgd_map_page()
  arm64: kexec: remove cpu-reset.h
  arm64: kexec: remove the pre-kexec PoC maintenance
  arm64: kexec: keep MMU enabled during kexec relocation
  arm64: kexec: install a copy of the linear-map
  arm64: kexec: use ld script for relocation function
  arm64: kexec: relocate in EL1 mode
  arm64: kexec: configure EL2 vectors for kexec
  arm64: kexec: pass kimage as the only argument to relocation function
  arm64: kexec: Use dcache ops macros instead of open-coding
  arm64: kexec: skip relocation code for inplace kexec
  arm64: kexec: flush image and lists during kexec load time
  arm64: hibernate: abstract ttrb0 setup function
  arm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors
  arm64: kernel: add helper for booted at EL2 and not VHE
</content>
</entry>
<entry>
<title>arm64: extable: consolidate definitions</title>
<updated>2021-10-21T09:45:22Z</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2021-10-19T16:02:13Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=819771cc289226e392d5d45f1d162b47ace4eff6'/>
<id>urn:sha1:819771cc289226e392d5d45f1d162b47ace4eff6</id>
<content type='text'>
In subsequent patches we'll alter the structure and usage of struct
exception_table_entry. For inline assembly, we create these using the
`_ASM_EXTABLE()` CPP macro defined in &lt;asm/uaccess.h&gt;, and for plain
assembly code we use the `_asm_extable()` GAS macro defined in
&lt;asm/assembler.h&gt;, which are largely identical save for different
escaping and stringification requirements.

This patch moves the common definitions to a new &lt;asm/asm-extable.h&gt;
header, so that it's easier to keep the two in sync, and to remove the
implication that these are only used for uaccess helpers (as e.g.
load_unaligned_zeropad() is only used on kernel memory, and depends upon
`_ASM_EXTABLE()`).

At the same time, a few minor modifications are made for clarity and in
preparation for subsequent patches:

* The structure creation is factored out into an `__ASM_EXTABLE_RAW()`
  macro (see the sketch after this list). This will make it easier to
  support different fixup variants in subsequent patches without needing
  to update all users of `_ASM_EXTABLE()`, and makes it easier to see
  that the CPP and GAS variants of the macros are structurally identical.

  For the CPP macro, the stringification of fields is left to the
  wrapper macro, `_ASM_EXTABLE()`, as in subsequent patches it will be
  necessary to stringify fields in wrapper macros to safely concatenate
  strings which cannot be token-pasted together in CPP.

* The fields of the structure are created separately on their own lines.
  This will make it easier to add/remove/modify individual fields
  clearly.

* Additional parentheses are added around the use of macro arguments in
  field definitions to avoid any potential problems with evaluation due
  to operator precedence, and to make errors upon misuse clearer.

* USER() is moved into &lt;asm/asm-uaccess.h&gt;, as it is not required by all
  assembly code, and is already referred to by comments in that file.
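
A hedged sketch of the factored CPP side (the section name and field
layout follow the usual arm64 extable shape, but may not match the
patch verbatim):

  #define __ASM_EXTABLE_RAW(insn, fixup)		\
  	".pushsection	__ex_table, \"a\"\n"		\
  	".align		3\n"				\
  	".long		((" insn ") - .)\n"		\
  	".long		((" fixup ") - .)\n"		\
  	".popsection\n"

  #define _ASM_EXTABLE(insn, fixup)	__ASM_EXTABLE_RAW(#insn, #fixup)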

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Reviewed-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: James Morse &lt;james.morse@arm.com&gt;
Cc: Robin Murphy &lt;robin.murphy@arm.com&gt;
Cc: Will Deacon &lt;will@kernel.org&gt;
Link: https://lore.kernel.org/r/20211019160219.5202-8-mark.rutland@arm.com
Signed-off-by: Will Deacon &lt;will@kernel.org&gt;
</content>
</entry>
<entry>
<title>arm64: kexec: install a copy of the linear-map</title>
<updated>2021-10-01T12:31:00Z</updated>
<author>
<name>Pasha Tatashin</name>
<email>pasha.tatashin@soleen.com</email>
</author>
<published>2021-09-30T14:31:09Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=3744b5280e67f54579abe92576deec0079242323'/>
<id>urn:sha1:3744b5280e67f54579abe92576deec0079242323</id>
<content type='text'>
To perform the kexec relocation with the MMU enabled, we need a copy
of the linear map.

Create one, and install it from the relocation code. This has to be done
from the assembly code as it will be idmapped with TTBR0. The kernel
runs in TTBR1, so it can't use the break-before-make sequence on the
mapping it is executing from.
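
A hedged sketch of the install step (the register choice and the bare
TLB maintenance are illustrative assumptions, not the patch's exact
code):

	/* x1 = physical base of the copied linear-map tables */
	msr	ttbr1_el1, x1		// point TTBR1 at the copy
	isb
	tlbi	vmalle1			// discard walks cached from the old map
	dsb	nsh
	isb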

This makes no difference yet as the relocation code runs with the MMU
disabled.

Suggested-by: James Morse &lt;james.morse@arm.com&gt;
Signed-off-by: Pasha Tatashin &lt;pasha.tatashin@soleen.com&gt;
Acked-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Link: https://lore.kernel.org/r/20210930143113.1502553-12-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon &lt;will@kernel.org&gt;
</content>
</entry>
<entry>
<title>arm64: kexec: Use dcache ops macros instead of open-coding</title>
<updated>2021-10-01T12:31:00Z</updated>
<author>
<name>Pasha Tatashin</name>
<email>pasha.tatashin@soleen.com</email>
</author>
<published>2021-09-30T14:31:04Z</published>
<link rel='alternate' type='text/html' href='https://git.zx2c4.com/linux-dev/commit/?id=3036ec599332cdfb406249270e50ad3f1a5c5940'/>
<id>urn:sha1:3036ec599332cdfb406249270e50ad3f1a5c5940</id>
<content type='text'>
kexec does dcache maintenance when it re-writes all memory. Our
dcache_by_line_op macro depends on reading the sanitised DminLine
from memory. Kexec may have overwritten this, so it open-codes the
sequence.

dcache_by_line_op is a whole set of macros; it uses dcache_line_size,
which uses read_ctr for the sanitised DminLine. Reading the DminLine
is the first thing dcache_by_line_op does.

Rename dcache_by_line_op to dcache_by_myline_op and take DminLine as
an argument. Kexec can now use the slightly smaller macro.
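
A hedged sketch of the resulting shape (simplified; the real macro has
more plumbing than shown):

	.macro dcache_by_myline_op op, domain, start, end, linesz, tmp
	sub	\tmp, \linesz, #1
	bic	\start, \start, \tmp	// align start down to a cache line
.Ldcache_op\@:
	dc	\op, \start
	add	\start, \start, \linesz
	cmp	\start, \end
	b.lo	.Ldcache_op\@
	dsb	\domain
	.endm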

This makes upcoming changes to the dcache maintenance easier on
the eye.

Code generated by the existing callers is unchanged.

Suggested-by: James Morse &lt;james.morse@arm.com&gt;
Signed-off-by: Pasha Tatashin &lt;pasha.tatashin@soleen.com&gt;
Acked-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Link: https://lore.kernel.org/r/20210930143113.1502553-7-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon &lt;will@kernel.org&gt;
</content>
</entry>
</feed>
