author		Heiko Carstens <heiko.carstens@de.ibm.com>	2016-12-28 11:33:48 +0100
committer	Martin Schwidefsky <schwidefsky@de.ibm.com>	2017-01-16 07:27:48 +0100
commit		e991c24d68b8c0ba297eeb7af80b1e398e98c33f (patch)
tree		0de08c08b2ecdf7b9a22683ac8cff66331ae7782 /arch/s390/defconfig
parent		Linux 4.10-rc4 (diff)
download	linux-dev-e991c24d68b8c0ba297eeb7af80b1e398e98c33f.tar.xz
		linux-dev-e991c24d68b8c0ba297eeb7af80b1e398e98c33f.zip
s390/ctl_reg: make __ctl_load a full memory barrier
We have quite a lot of code that depends on the order of the __ctl_load inline assembly and subsequent memory accesses, e.g. disabling lowcore protection before writing to the lowcore. Since the __ctl_load macro has neither memory barrier semantics nor any other dependencies, the compiler is, in theory, free to reorder code. In other words: stores to the lowcore could happen before lowcore protection is disabled. To avoid this class of potential bugs, simply add a full memory barrier to the __ctl_load macro.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
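A minimal sketch of the mechanism, assuming __ctl_load is defined roughly as in the mainline s390 header (arch/s390/include/asm/ctl_reg.h; exact constraints and the surrounding BUILD_BUG_ON may differ): the barrier comes from the "memory" clobber on the lctlg inline assembly, which tells GCC the statement may read or write arbitrary memory, so loads and stores cannot be reordered across the control-register load.

	/* Sketch, not the verbatim patch: load control registers %low..%high
	 * from 'array'.  The "memory" clobber is the full compiler barrier
	 * this commit adds, preventing e.g. lowcore stores from being moved
	 * ahead of the __ctl_load that disables lowcore protection. */
	#define __ctl_load(array, low, high) do {				\
		typedef struct { char _[sizeof(array)]; } addrtype;		\
										\
		BUILD_BUG_ON(sizeof(addrtype) !=				\
			     (high - low + 1) * sizeof(long));			\
		asm volatile(							\
			"	lctlg	%1,%2,%0\n"				\
			:							\
			: "Q" (*(addrtype *)(&array)), "i" (low), "i" (high)	\
			: "memory");	/* <-- added full memory barrier */	\
	} while (0)

Without the clobber the asm statement only consumes its listed inputs, so nothing stops the compiler from sinking earlier stores past it; with it, the statement acts as an ordering point for all memory accesses.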