2019-02-28  crypto: arm64/chacha - fix chacha_4block_xor_neon() for big endian  [Eric Biggers, 1 file, +16/-0]
The change to encrypt a fifth ChaCha block using scalar instructions caused the chacha20-neon, xchacha20-neon, and xchacha12-neon self-tests to start failing on big endian arm64 kernels. The bug is that the keystream block produced in 32-bit scalar registers is directly XOR'd with the data words, which are loaded and stored in native endianness. Thus in big endian mode the data bytes end up XOR'd with the wrong bytes. Fix it by byte-swapping the keystream words in big endian mode. Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed") Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
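As a hedged illustration of the idea only (the real fix is in the arm64 assembly, not C, and the function name below is made up for the example): on a big endian kernel the 32-bit keystream words have to be byte-swapped before being XOR'd with data words that were loaded in native endianness.

    #include <linux/types.h>
    #include <linux/swab.h>

    /* Sketch only: shows why the byte swap is needed, not the NEON/scalar code. */
    static void xor_keystream_word(u32 *data, u32 keystream_le)
    {
    #ifdef __AARCH64EB__
            /* the data word was loaded big endian; swap the LE keystream to match */
            *data ^= swab32(keystream_le);
    #else
            *data ^= keystream_le;
    #endif
    }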
2019-02-22  crypto: sha512/arm - fix crash bug in Thumb2 build  [Ard Biesheuvel, 2 files, +4/-2]
The SHA512 code we adopted from the OpenSSL project uses a rather peculiar way to take the address of the round constant table: it takes the address of the sha256_block_data_order() routine, and subtracts a constant known quantity to arrive at the base of the table, which is emitted by the same assembler code right before the routine's entry point.

However, recent versions of binutils have helpfully changed the behavior of references emitted via an ADR instruction when running in Thumb2 mode: it now takes the Thumb execution mode bit into account, which is bit 0 of the address. This means the produced table address also has bit 0 set, and so we end up with an address value pointing 1 byte past the start of the table, which results in crashes such as

    Unable to handle kernel paging request at virtual address bf825000
    pgd = 42f44b11
    [bf825000] *pgd=80000040206003, *pmd=5f1bd003, *pte=00000000
    Internal error: Oops: 207 [#1] PREEMPT SMP THUMB2
    Modules linked in: sha256_arm(+) sha1_arm_ce sha1_arm ...
    CPU: 7 PID: 396 Comm: cryptomgr_test Not tainted 5.0.0-rc6+ #144
    Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
    PC is at sha256_block_data_order+0xaaa/0xb30 [sha256_arm]
    LR is at __this_module+0x17fd/0xffffe800 [sha256_arm]
    pc : [<bf820bca>]    lr : [<bf824ffd>]    psr: 800b0033
    sp : ebc8bbe8  ip : faaabe1c  fp : 2fdd3433
    r10: 4c5f1692  r9 : e43037df  r8 : b04b0a5a
    r7 : c369d722  r6 : 39c3693e  r5 : 7a013189  r4 : 1580d26b
    r3 : 8762a9b0  r2 : eea9c2cd  r1 : 3e9ab536  r0 : 1dea4ae7
    Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
    Control: 70c5383d  Table: 6b8467c0  DAC: dbadc0de
    Process cryptomgr_test (pid: 396, stack limit = 0x69e1fe23)
    Stack: (0xebc8bbe8 to 0xebc8c000)
    ...
    unwind: Unknown symbol address bf820bca
    unwind: Index not found bf820bca
    Code: 441a ea80 40f9 440a (f85e) 3b04
    ---[ end trace e560cce92700ef8a ]---

Given that this affects older kernels as well, in case they are built with a recent toolchain, apply a minimal backportable fix, which is to emit another non-code label at the start of the routine, and reference that instead. (This is similar to the current upstream state of this file in OpenSSL.)

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22  crypto: sha256/arm - fix crash bug in Thumb2 build  [Ard Biesheuvel, 2 files, +4/-2]
The SHA256 code we adopted from the OpenSSL project uses a rather peculiar way to take the address of the round constant table: it takes the address of the sha256_block_data_order() routine, and subtracts a constant known quantity to arrive at the base of the table, which is emitted by the same assembler code right before the routine's entry point.

However, recent versions of binutils have helpfully changed the behavior of references emitted via an ADR instruction when running in Thumb2 mode: it now takes the Thumb execution mode bit into account, which is bit 0 of the address. This means the produced table address also has bit 0 set, and so we end up with an address value pointing 1 byte past the start of the table, which results in crashes such as

    Unable to handle kernel paging request at virtual address bf825000
    pgd = 42f44b11
    [bf825000] *pgd=80000040206003, *pmd=5f1bd003, *pte=00000000
    Internal error: Oops: 207 [#1] PREEMPT SMP THUMB2
    Modules linked in: sha256_arm(+) sha1_arm_ce sha1_arm ...
    CPU: 7 PID: 396 Comm: cryptomgr_test Not tainted 5.0.0-rc6+ #144
    Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
    PC is at sha256_block_data_order+0xaaa/0xb30 [sha256_arm]
    LR is at __this_module+0x17fd/0xffffe800 [sha256_arm]
    pc : [<bf820bca>]    lr : [<bf824ffd>]    psr: 800b0033
    sp : ebc8bbe8  ip : faaabe1c  fp : 2fdd3433
    r10: 4c5f1692  r9 : e43037df  r8 : b04b0a5a
    r7 : c369d722  r6 : 39c3693e  r5 : 7a013189  r4 : 1580d26b
    r3 : 8762a9b0  r2 : eea9c2cd  r1 : 3e9ab536  r0 : 1dea4ae7
    Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
    Control: 70c5383d  Table: 6b8467c0  DAC: dbadc0de
    Process cryptomgr_test (pid: 396, stack limit = 0x69e1fe23)
    Stack: (0xebc8bbe8 to 0xebc8c000)
    ...
    unwind: Unknown symbol address bf820bca
    unwind: Index not found bf820bca
    Code: 441a ea80 40f9 440a (f85e) 3b04
    ---[ end trace e560cce92700ef8a ]---

Given that this affects older kernels as well, in case they are built with a recent toolchain, apply a minimal backportable fix, which is to emit another non-code label at the start of the routine, and reference that instead. (This is similar to the current upstream state of this file in OpenSSL.)

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22  crypto: ccree - add missing inline qualifier  [Gilad Ben-Yossef, 1 file, +1/-1]
Commit 1358c13a48c4 ("crypto: ccree - fix resume race condition on init") was missing an "inline" qualifier on the stub function used when CONFIG_PM is not set, causing a build warning.
Fixes: 1358c13a48c4 ("crypto: ccree - fix resume race condition on init")
Cc: stable@kernel.org # v4.20
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
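The pattern at issue, as a hedged sketch (the function name below is illustrative, not necessarily the driver's actual symbol): a !CONFIG_PM stub defined in a header must be static inline, otherwise every file that includes the header gets an unused local definition and the compiler warns.

    #ifdef CONFIG_PM
    int cc_pm_init(struct cc_drvdata *drvdata);  /* real implementation elsewhere */
    #else
    /* Without "inline", this header-defined stub triggers a
     * defined-but-not-used warning in every translation unit. */
    static inline int cc_pm_init(struct cc_drvdata *drvdata)
    {
            return 0;
    }
    #endif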
2019-02-08  crypto: ccree - fix resume race condition on init  [Gilad Ben-Yossef, 3 files, +13/-10]
We were enabling autosuspend, which uses data set by the hash module, before the hash module was initialized, causing a crash on resume as part of the startup sequence if the race was lost. This was never a real problem because the PM infrastructure used low-resolution timers, so we always won the race, until commit 8234f6734c5d ("PM-runtime: Switch autosuspend over to using hrtimers") changed that :-) Fix this by separating the PM setup from its enablement and doing the latter only at the end of the init sequence.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: stable@kernel.org # v4.20
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-25  crypto: cavium/nitrox - Invoke callback after DMA unmap  [Nagadheeraj Rottela, 1 file, +6/-4]
In process_response_list(), invoke the callback handler after unmapping the DMA buffers. This ensures the DMA data is synced from the device to the CPU before the client code accesses it from the callback handler.
Fixes: c9613335bf4f ("crypto: cavium/nitrox - Added AEAD cipher support")
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
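A hedged sketch of the ordering the patch enforces (structure and names are illustrative, not the nitrox driver's actual code): the DMA unmap is what syncs device writes back to the CPU, so it has to happen before the completion callback reads the result.

    #include <linux/dma-mapping.h>

    struct resp_ctx {
            struct device *dev;
            dma_addr_t dma;
            size_t len;
            int status;
            void (*callback)(void *arg, int status);
            void *cb_arg;
    };

    static void complete_response(struct resp_ctx *ctx)
    {
            /* unmap first: this syncs the data from the device to the CPU ... */
            dma_unmap_single(ctx->dev, ctx->dma, ctx->len, DMA_FROM_DEVICE);
            /* ... and only then let the client callback look at it */
            ctx->callback(ctx->cb_arg, ctx->status);
    }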
2019-01-10  crypto: sm3 - fix undefined shift by >= width of value  [Eric Biggers, 1 file, +1/-1]
sm3_compress() calls rol32() with shift >= 32, which causes undefined behavior. This is easily detected by enabling CONFIG_UBSAN. Explicitly AND with 31 to make the behavior well defined. Fixes: 4f0fc1600edb ("crypto: sm3 - add OSCCA SM3 secure hash") Cc: <stable@vger.kernel.org> # v4.15+ Cc: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
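A hedged sketch of the fix: rotating a 32-bit value by 32 is undefined behaviour in C (the left shift by 32 is the problem), and masking the shift count with 31 makes it well defined while preserving the intended result, since a rotate by 32 is the identity.

    #include <linux/types.h>
    #include <linux/bitops.h>

    /* Sketch of the guard; the real change masks the shift where it can reach 32. */
    static u32 sm3_rol(u32 x, unsigned int shift)
    {
            return rol32(x, shift & 31);
    }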
2019-01-10  crypto: talitos - fix ablkcipher for CONFIG_VMAP_STACK  [Christophe Leroy, 1 file, +4/-1]
[ 2.364486] WARNING: CPU: 0 PID: 60 at ./arch/powerpc/include/asm/io.h:837 dma_nommu_map_page+0x44/0xd4 [ 2.373579] CPU: 0 PID: 60 Comm: cryptomgr_test Tainted: G W 4.20.0-rc5-00560-g6bfb52e23a00-dirty #531 [ 2.384740] NIP: c000c540 LR: c000c584 CTR: 00000000 [ 2.389743] REGS: c95abab0 TRAP: 0700 Tainted: G W (4.20.0-rc5-00560-g6bfb52e23a00-dirty) [ 2.400042] MSR: 00029032 <EE,ME,IR,DR,RI> CR: 24042204 XER: 00000000 [ 2.406669] [ 2.406669] GPR00: c02f2244 c95abb60 c6262990 c95abd80 0000256a 00000001 00000001 00000001 [ 2.406669] GPR08: 00000000 00002000 00000010 00000010 24042202 00000000 00000100 c95abd88 [ 2.406669] GPR16: 00000000 c05569d4 00000001 00000010 c95abc88 c0615664 00000004 00000000 [ 2.406669] GPR24: 00000010 c95abc88 c95abc88 00000000 c61ae210 c7ff6d40 c61ae210 00003d68 [ 2.441559] NIP [c000c540] dma_nommu_map_page+0x44/0xd4 [ 2.446720] LR [c000c584] dma_nommu_map_page+0x88/0xd4 [ 2.451762] Call Trace: [ 2.454195] [c95abb60] [82000808] 0x82000808 (unreliable) [ 2.459572] [c95abb80] [c02f2244] talitos_edesc_alloc+0xbc/0x3c8 [ 2.465493] [c95abbb0] [c02f2600] ablkcipher_edesc_alloc+0x4c/0x5c [ 2.471606] [c95abbd0] [c02f4ed0] ablkcipher_encrypt+0x20/0x64 [ 2.477389] [c95abbe0] [c02023b0] __test_skcipher+0x4bc/0xa08 [ 2.483049] [c95abe00] [c0204b60] test_skcipher+0x2c/0xcc [ 2.488385] [c95abe20] [c0204c48] alg_test_skcipher+0x48/0xbc [ 2.494064] [c95abe40] [c0205cec] alg_test+0x164/0x2e8 [ 2.499142] [c95abf00] [c0200dec] cryptomgr_test+0x48/0x50 [ 2.504558] [c95abf10] [c0039ff4] kthread+0xe4/0x110 [ 2.509471] [c95abf40] [c000e1d0] ret_from_kernel_thread+0x14/0x1c [ 2.515532] Instruction dump: [ 2.518468] 7c7e1b78 7c9d2378 7cbf2b78 41820054 3d20c076 8089c200 3d20c076 7c84e850 [ 2.526127] 8129c204 7c842e70 7f844840 419c0008 <0fe00000> 2f9e0000 54847022 7c84fa14 [ 2.533960] ---[ end trace bf78d94af73fe3b8 ]--- [ 2.539123] talitos ff020000.crypto: master data transfer error [ 2.544775] talitos ff020000.crypto: TEA error: ISR 0x20000000_00000040 [ 2.551625] alg: skcipher: encryption failed on test 1 for ecb-aes-talitos: ret=22 IV cannot be on stack when CONFIG_VMAP_STACK is selected because the stack cannot be DMA mapped anymore. This patch copies the IV into the extended descriptor. Fixes: 4de9d0b547b9 ("crypto: talitos - Add ablkcipher algorithms") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
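A hedged sketch of the approach described above (struct and field names are illustrative, not the real talitos_edesc layout): copy the caller's IV, which may live on a vmalloc'ed stack under CONFIG_VMAP_STACK, into the kmalloc'ed extended descriptor and DMA-map that copy instead.

    #include <linux/string.h>
    #include <linux/dma-mapping.h>

    struct my_edesc {
            u8 iv[16];      /* kmalloc'ed memory, so it can be DMA mapped */
            /* ... hardware descriptor follows ... */
    };

    static dma_addr_t map_iv_copy(struct device *dev, struct my_edesc *edesc,
                                  const u8 *iv, unsigned int ivsize)
    {
            memcpy(edesc->iv, iv, ivsize);  /* IV may be on the stack; copy it */
            return dma_map_single(dev, edesc->iv, ivsize, DMA_TO_DEVICE);
    }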
2019-01-10  crypto: talitos - reorder code in talitos_edesc_alloc()  [Christophe Leroy, 1 file, +7/-18]
This patch moves the mapping of the IV after the kmalloc(). This avoids having to unmap in case kmalloc() fails.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-10  crypto: adiantum - initialize crypto_spawn::inst  [Eric Biggers, 1 file, +4/-0]
crypto_grab_*() doesn't set crypto_spawn::inst, so templates must set it beforehand. Otherwise it will be left NULL, which causes a crash in certain cases where algorithms are dynamically loaded/unloaded. E.g. with CONFIG_CRYPTO_CHACHA20_X86_64=m, the following caused a crash: insmod chacha-x86_64.ko python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(("skcipher", "adiantum(xchacha12,aes)"))' rmmod chacha-x86_64.ko python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(("skcipher", "adiantum(xchacha12,aes)"))' Fixes: 059c2a4d8e16 ("crypto: adiantum - add Adiantum support") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
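A hedged sketch of the template pattern involved (using the pre-5.6 spawn API that was current at the time; the function name and cipher string are illustrative, not the adiantum_create() code): the spawn has to be tied to its owning instance before crypto_grab_*() is called, so crypto_spawn::inst is never left NULL.

    #include <crypto/internal/skcipher.h>

    static int grab_inner_cipher(struct skcipher_instance *inst,
                                 struct crypto_skcipher_spawn *spawn,
                                 const char *cipher_name)
    {
            /* the missing step: record the owning instance in the spawn ... */
            crypto_set_skcipher_spawn(spawn, skcipher_crypto_instance(inst));
            /* ... before grabbing the underlying algorithm */
            return crypto_grab_skcipher(spawn, cipher_name, 0, 0);
    }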
2019-01-10  crypto: cavium/nitrox - Use after free in process_response_list()  [Dan Carpenter, 1 file, +1/-1]
We free "sr" and then dereference it on the next line. Fixes: c9613335bf4f ("crypto: cavium/nitrox - Added AEAD cipher support") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
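The shape of the bug and of the fix, as a hedged sketch ('softreq' and its fields are stand-ins, not the driver's real structure): anything taken from the object must be used, or saved into locals, before kfree() runs.

    #include <linux/slab.h>

    struct softreq {
            void (*callback)(void *arg, int err);
            void *cb_arg;
            int err;
    };

    static void complete_and_free(struct softreq *sr)
    {
            /* buggy order was: kfree(sr); sr->callback(...); */
            sr->callback(sr->cb_arg, sr->err);      /* use it first ... */
            kfree(sr);                              /* ... then free it */
    }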
2019-01-10  crypto: authencesn - Avoid twice completion call in decrypt path  [Harsh Jain, 1 file, +1/-1]
The authencesn template's decrypt path unconditionally calls aead_request_complete() after ahash_verify(), which leads to the following kernel panic after decryption.

    [  338.539800] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
    [  338.548372] PGD 0 P4D 0
    [  338.551157] Oops: 0000 [#1] SMP PTI
    [  338.554919] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: G W I 4.19.7+ #13
    [  338.564431] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.0 07/29/10
    [  338.572212] RIP: 0010:esp_input_done2+0x350/0x410 [esp4]
    [  338.578030] Code: ff 0f b6 68 10 48 8b 83 c8 00 00 00 e9 8e fe ff ff 8b 04 25 04 00 00 00 83 e8 01 48 98 48 8b 3c c5 10 00 00 00 e9 f7 fd ff ff <8b> 04 25 04 00 00 00 83 e8 01 48 98 4c 8b 24 c5 10 00 00 00 e9 3b
    [  338.598547] RSP: 0018:ffff911c97803c00 EFLAGS: 00010246
    [  338.604268] RAX: 0000000000000002 RBX: ffff911c4469ee00 RCX: 0000000000000000
    [  338.612090] RDX: 0000000000000000 RSI: 0000000000000130 RDI: ffff911b87c20400
    [  338.619874] RBP: 0000000000000000 R08: ffff911b87c20498 R09: 000000000000000a
    [  338.627610] R10: 0000000000000001 R11: 0000000000000004 R12: 0000000000000000
    [  338.635402] R13: ffff911c89590000 R14: ffff911c91730000 R15: 0000000000000000
    [  338.643234] FS: 0000000000000000(0000) GS:ffff911c97800000(0000) knlGS:0000000000000000
    [  338.652047] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [  338.658299] CR2: 0000000000000004 CR3: 00000001ec20a000 CR4: 00000000000006f0
    [  338.666382] Call Trace:
    [  338.669051] <IRQ>
    [  338.671254] esp_input_done+0x12/0x20 [esp4]
    [  338.675922] chcr_handle_resp+0x3b5/0x790 [chcr]
    [  338.680949] cpl_fw6_pld_handler+0x37/0x60 [chcr]
    [  338.686080] chcr_uld_rx_handler+0x22/0x50 [chcr]
    [  338.691233] uldrx_handler+0x8c/0xc0 [cxgb4]
    [  338.695923] process_responses+0x2f0/0x5d0 [cxgb4]
    [  338.701177] ? bitmap_find_next_zero_area_off+0x3a/0x90
    [  338.706882] ? matrix_alloc_area.constprop.7+0x60/0x90
    [  338.712517] ? apic_update_irq_cfg+0x82/0xf0
    [  338.717177] napi_rx_handler+0x14/0xe0 [cxgb4]
    [  338.722015] net_rx_action+0x2aa/0x3e0
    [  338.726136] __do_softirq+0xcb/0x280
    [  338.730054] irq_exit+0xde/0xf0
    [  338.733504] do_IRQ+0x54/0xd0
    [  338.736745] common_interrupt+0xf/0xf

Fixes: 104880a6b470 ("crypto: authencesn - Convert to new AEAD...")
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-10  crypto: caam - fix SHA support detection  [Horia Geantă, 3 files, +11/-1]
The addition of Chacha20 + Poly1305 authenc support inadvertently broke detection of algorithms supported by MDHA (Message Digest Hardware Accelerator), fix it. Fixes: d6bbd4eea243 ("crypto: caam/jr - add support for Chacha20 + Poly1305") Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-10  crypto: caam - fix zero-length buffer DMA mapping  [Aymen Sghaier, 1 file, +9/-6]
Recent changes - probably DMA API related (generic and/or arm64-specific) - exposed a case where driver maps a zero-length buffer: ahash_init()->ahash_update()->ahash_final() with a zero-length string to hash kernel BUG at kernel/dma/swiotlb.c:475! Internal error: Oops - BUG: 0 [#1] PREEMPT SMP Modules linked in: CPU: 2 PID: 1823 Comm: cryptomgr_test Not tainted 4.20.0-rc1-00108-g00c9fe37a7f2 #1 Hardware name: LS1046A RDB Board (DT) pstate: 80000005 (Nzcv daif -PAN -UAO) pc : swiotlb_tbl_map_single+0x170/0x2b8 lr : swiotlb_map_page+0x134/0x1f8 sp : ffff00000f79b8f0 x29: ffff00000f79b8f0 x28: 0000000000000000 x27: ffff0000093d0000 x26: 0000000000000000 x25: 00000000001f3ffe x24: 0000000000200000 x23: 0000000000000000 x22: 00000009f2c538c0 x21: ffff800970aeb410 x20: 0000000000000001 x19: ffff800970aeb410 x18: 0000000000000007 x17: 000000000000000e x16: 0000000000000001 x15: 0000000000000019 x14: c32cb8218a167fe8 x13: ffffffff00000000 x12: ffff80097fdae348 x11: 0000800976bca000 x10: 0000000000000010 x9 : 0000000000000000 x8 : ffff0000091fd6c8 x7 : 0000000000000000 x6 : 00000009f2c538bf x5 : 0000000000000000 x4 : 0000000000000001 x3 : 0000000000000000 x2 : 00000009f2c538c0 x1 : 00000000f9fff000 x0 : 0000000000000000 Process cryptomgr_test (pid: 1823, stack limit = 0x(____ptrval____)) Call trace: swiotlb_tbl_map_single+0x170/0x2b8 swiotlb_map_page+0x134/0x1f8 ahash_final_no_ctx+0xc4/0x6cc ahash_final+0x10/0x18 crypto_ahash_op+0x30/0x84 crypto_ahash_final+0x14/0x1c __test_hash+0x574/0xe0c test_hash+0x28/0x80 __alg_test_hash+0x84/0xd0 alg_test_hash+0x78/0x144 alg_test.part.30+0x12c/0x2b4 alg_test+0x3c/0x68 cryptomgr_test+0x44/0x4c kthread+0xfc/0x128 ret_from_fork+0x10/0x18 Code: d34bfc18 2a1a03f7 1a9f8694 35fff89a (d4210000) Cc: <stable@vger.kernel.org> Signed-off-by: Aymen Sghaier <aymen.sghaier@nxp.com> Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
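A hedged sketch of the kind of guard the fix adds (names are illustrative, and the real caamhash code tracks more state): a zero-length buffer must never be handed to dma_map_single(); treat "no data" as a valid, unmapped case instead.

    #include <linux/errno.h>
    #include <linux/dma-mapping.h>

    static int map_buf_if_any(struct device *dev, void *buf, unsigned int len,
                              dma_addr_t *out)
    {
            if (!len) {             /* e.g. final hash of a zero-length string */
                    *out = 0;
                    return 0;
            }
            *out = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            return dma_mapping_error(dev, *out) ? -ENOMEM : 0;
    }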
2019-01-10  crypto: ccree - convert to use crypto_authenc_extractkeys()  [Eric Biggers, 1 file, +19/-21]
Convert the ccree crypto driver to use crypto_authenc_extractkeys() so that it picks up the fix for broken validation of rtattr::rta_len. Fixes: ff27e85a85bb ("crypto: ccree - add AEAD support") Cc: <stable@vger.kernel.org> # v4.17+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-10  crypto: bcm - convert to use crypto_authenc_extractkeys()  [Eric Biggers, 2 files, +14/-31]
Convert the bcm crypto driver to use crypto_authenc_extractkeys() so that it picks up the fix for broken validation of rtattr::rta_len. This also fixes the DES weak key check to actually be done on the right key. (It was checking the authentication key, not the encryption key...) Fixes: 9d12ba86f818 ("crypto: brcm - Add Broadcom SPU driver") Cc: <stable@vger.kernel.org> # v4.11+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
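What "convert to use crypto_authenc_extractkeys()" means in practice, shown as a hedged sketch (program_keys() is a made-up stand-in for the driver-specific part; this is not the bcm or ccree code itself): the setkey handler calls the helper instead of open-coding the rtattr walk, then reads the split keys out of struct crypto_authenc_keys.

    #include <crypto/aead.h>
    #include <crypto/authenc.h>
    #include <linux/errno.h>

    /* hypothetical driver-specific helper, declared only for the sketch */
    int program_keys(struct crypto_aead *aead, const u8 *authkey,
                     unsigned int authkeylen, const u8 *enckey,
                     unsigned int enckeylen);

    static int my_aead_setkey(struct crypto_aead *aead, const u8 *key,
                              unsigned int keylen)
    {
            struct crypto_authenc_keys keys;

            if (crypto_authenc_extractkeys(&keys, key, keylen))
                    return -EINVAL;

            /* keys.authkey/authkeylen: hash key; keys.enckey/enckeylen: cipher
             * key (the one the DES weak-key check must run on). */
            return program_keys(aead, keys.authkey, keys.authkeylen,
                                keys.enckey, keys.enckeylen);
    }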
2019-01-10  crypto: authenc - fix parsing key with misaligned rta_len  [Eric Biggers, 1 file, +11/-3]
Keys for "authenc" AEADs are formatted as an rtattr containing a 4-byte 'enckeylen', followed by an authentication key and an encryption key. crypto_authenc_extractkeys() parses the key to find the inner keys. However, it fails to consider the case where the rtattr's payload is longer than 4 bytes but not 4-byte aligned, and where the key ends before the next 4-byte aligned boundary. In this case, 'keylen -= RTA_ALIGN(rta->rta_len);' underflows to a value near UINT_MAX. This causes a buffer overread and crash during crypto_ahash_setkey().

Fix it by restricting the rtattr payload to the expected size.

Reproducer using AF_ALG:

    #include <linux/if_alg.h>
    #include <linux/rtnetlink.h>
    #include <sys/socket.h>

    int main()
    {
            int fd;
            struct sockaddr_alg addr = {
                    .salg_type = "aead",
                    .salg_name = "authenc(hmac(sha256),cbc(aes))",
            };
            struct {
                    struct rtattr attr;
                    __be32 enckeylen;
                    char keys[1];
            } __attribute__((packed)) key = {
                    .attr.rta_len = sizeof(key),
                    .attr.rta_type = 1 /* CRYPTO_AUTHENC_KEYA_PARAM */,
            };

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, &key, sizeof(key));
    }

It caused:

    BUG: unable to handle kernel paging request at ffff88007ffdc000
    PGD 2e01067 P4D 2e01067 PUD 2e04067 PMD 2e05067 PTE 0
    Oops: 0000 [#1] SMP
    CPU: 0 PID: 883 Comm: authenc Not tainted 4.20.0-rc1-00108-g00c9fe37a7f27 #13
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
    RIP: 0010:sha256_ni_transform+0xb3/0x330 arch/x86/crypto/sha256_ni_asm.S:155
    [...]
    Call Trace:
     sha256_ni_finup+0x10/0x20 arch/x86/crypto/sha256_ssse3_glue.c:321
     crypto_shash_finup+0x1a/0x30 crypto/shash.c:178
     shash_digest_unaligned+0x45/0x60 crypto/shash.c:186
     crypto_shash_digest+0x24/0x40 crypto/shash.c:202
     hmac_setkey+0x135/0x1e0 crypto/hmac.c:66
     crypto_shash_setkey+0x2b/0xb0 crypto/shash.c:66
     shash_async_setkey+0x10/0x20 crypto/shash.c:223
     crypto_ahash_setkey+0x2d/0xa0 crypto/ahash.c:202
     crypto_authenc_setkey+0x68/0x100 crypto/authenc.c:96
     crypto_aead_setkey+0x2a/0xc0 crypto/aead.c:62
     aead_setkey+0xc/0x10 crypto/algif_aead.c:526
     alg_setkey crypto/af_alg.c:223 [inline]
     alg_setsockopt+0xfe/0x130 crypto/af_alg.c:256
     __sys_setsockopt+0x6d/0xd0 net/socket.c:1902
     __do_sys_setsockopt net/socket.c:1913 [inline]
     __se_sys_setsockopt net/socket.c:1910 [inline]
     __x64_sys_setsockopt+0x1f/0x30 net/socket.c:1910
     do_syscall_64+0x4a/0x180 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe

Fixes: e236d4a89a2f ("[CRYPTO] authenc: Move enckeylen into key itself")
Cc: <stable@vger.kernel.org> # v2.6.25+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
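A hedged reconstruction of the corrected length handling (it may differ in detail from the upstream hunk): validate the aligned rtattr length against the remaining key length before it is subtracted, so 'keylen' can never underflow.

    #include <linux/rtnetlink.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    /* Sketch of the parsing guard; the real helper goes on to extract
     * enckeylen and split the authentication and encryption keys. */
    static int check_authenc_rta(const u8 *key, unsigned int keylen)
    {
            const struct rtattr *rta = (const void *)key;

            if (!RTA_OK(rta, keylen))
                    return -EINVAL;
            if (rta->rta_type != 1 /* CRYPTO_AUTHENC_KEYA_PARAM */)
                    return -EINVAL;
            /*
             * The fix: RTA_OK() does not validate the *aligned* length, so a
             * misaligned rta_len near the end of the key could otherwise make
             * 'keylen -= RTA_ALIGN(rta->rta_len)' wrap around.
             */
            if (keylen < RTA_ALIGN(rta->rta_len))
                    return -EINVAL;
            return 0;
    }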
2019-01-06  Linux 5.0-rc1  [Linus Torvalds, 1 file, +3/-3]
2019-01-06  Change mincore() to count "mapped" pages rather than "cached" pages  [Linus Torvalds, 1 file, +13/-81]
The semantics of what "in core" means for the mincore() system call are somewhat unclear, but Linux has always (since 2.3.52, which is when mincore() was initially done) treated it as "page is available in page cache" rather than "page is mapped in the mapping".

The problem with that traditional semantic is that it exposes a lot of system cache state that it really probably shouldn't, and that users shouldn't really even care about.

So let's try to avoid that information leak by simply changing the semantics to be that mincore() counts actual mapped pages, not pages that might be cheaply mapped if they were faulted (note the "might be" part of the old semantics: being in the cache doesn't actually guarantee that you can access them without IO anyway, since things like network filesystems may have to revalidate the cache before use).

In many ways the old semantics were somewhat insane even aside from the information leak issue. From the very beginning (and that beginning is a long time ago: 2.3.52 was released in March 2000, I think), the code had a comment saying

    Later we can get more picky about what "in core" means precisely.

and this is that "later". Admittedly it is much later than is really comfortable.

NOTE! This is a real semantic change, and it is for example known to change the output of "fincore", since that program literally does an mmap without populating it, and then does "mincore()" on that mapping that doesn't actually have any pages in it.

I'm hoping that nobody actually has any workflow that cares, since the info leak is real. We may have to do something different if it turns out that people have valid reasons to want the old semantics, and if we can limit the information leak sanely.

Cc: Kevin Easton <kevin@guarana.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Masatake YAMATO <yamato@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
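A small userspace illustration of the semantic change (hedged: the exact output depends on kernel version, page size, and whether the file is already in the page cache; error handling is omitted and a 4 KiB page size plus a non-empty file are assumed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
            int fd = open("/etc/hostname", O_RDONLY);
            off_t len = lseek(fd, 0, SEEK_END);
            size_t pages = (len + 4095) / 4096;
            unsigned char *vec = malloc(pages);
            unsigned long resident = 0;

            /* Map the file but never touch it, which is what fincore does. */
            void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

            mincore(p, len, vec);
            for (size_t i = 0; i < pages; i++)
                    resident += vec[i] & 1;

            /* Old semantics: counts page-cache residency. New semantics:
             * prints 0, because nothing has been faulted into the mapping. */
            printf("%lu of %zu pages reported in core\n", resident, pages);
            return 0;
    }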
2019-01-06  Fix 'access_ok()' on alpha and SH  [Linus Torvalds, 2 files, +10/-5]
Commit 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'") broke both alpha and SH booting in qemu, as noticed by Guenter Roeck.

It turns out that the bug wasn't actually in that commit itself (which would have been surprising: it was mostly a no-op), but in how the addition of access_ok() to the strncpy_from_user() and strnlen_user() functions now triggered the case where those functions would test the access of the very last byte of the user address space.

The string functions actually did that user range test before too, but they did it manually by just comparing against user_addr_max(). But with user_access_begin() doing the check (using "access_ok()"), it now exposed problems in the architecture implementations of that function.

For example, on alpha, the access_ok() helper macro looked like this:

    #define __access_ok(addr, size) \
            ((get_fs().seg & (addr | size | (addr+size))) == 0)

and what it basically tests is if any of the high bits get set (the USER_DS masking value is 0xfffffc0000000000).

And that's completely wrong for the "addr+size" check. Because it's off-by-one for the case where we check to the very end of the user address space, which is exactly what the strn*_user() functions do. Why? Because "addr+size" will be exactly the size of the address space, so trying to access the last byte of the user address space will fail the __access_ok() check, even though it shouldn't. As a result, the user string accessor functions failed consistently - because they literally don't know how long the string is going to be, and the max access is going to be that last byte of the user address space.

Side note: that alpha macro is buggy for another reason too - it re-uses the arguments twice.

And SH has another version of almost the exact same bug:

    #define __addr_ok(addr) \
            ((unsigned long __force)(addr) < current_thread_info()->addr_limit.seg)

so far so good: yes, a user address must be below the limit. But then:

    #define __access_ok(addr, size) \
            (__addr_ok((addr) + (size)))

is wrong with the exact same off-by-one case: the case when "addr+size" is exactly _equal_ to the limit is actually perfectly fine (think "one byte access at the last address of the user address space").

The SH version is actually seriously buggy in another way: it doesn't actually check for overflow, even though it did copy the _comment_ that talks about overflow.

So it turns out that both SH and alpha actually have completely buggy implementations of access_ok(), but they happened to work in practice (although the SH overflow one is a serious serious security bug, not that anybody likely cares about SH security).

This fixes the problems by using a similar macro on both alpha and SH. It isn't trying to be clever, the end address is based on this logic:

    unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;

which basically says "add start and length, and then subtract one unless the length was zero". We can't subtract one for a zero length, or we'd just hit an underflow instead.

For a lot of access_ok() users the length is a constant, so this isn't actually as expensive as it initially looks.

Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
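A hedged reconstruction of the fixed alpha-style check built from the logic quoted above (the upstream macro may differ in detail): evaluate the arguments only once, compute an inclusive end address with the "- !!size" trick, and keep the original high-bits test.

    /* 'get_fs().seg' is the alpha user-space limit mask quoted above. */
    #define __access_ok(addr, size) ({                                      \
            unsigned long __ao_a = (unsigned long)(addr);                   \
            unsigned long __ao_b = (unsigned long)(size);                   \
            unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b;            \
            (get_fs().seg & (__ao_a | __ao_b | __ao_end)) == 0;             \
    })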
2019-01-06  fscrypt: add Adiantum support  [Eric Biggers, 7 files, +468/-188]
Add support for the Adiantum encryption mode to fscrypt. Adiantum is a tweakable, length-preserving encryption mode with security provably reducible to that of XChaCha12 and AES-256, subject to a security bound. It's also a true wide-block mode, unlike XTS. See the paper "Adiantum: length-preserving encryption for entry-level processors" (https://eprint.iacr.org/2018/720.pdf) for more details. Also see commit 059c2a4d8e16 ("crypto: adiantum - add Adiantum support"). On sufficiently long messages, Adiantum's bottlenecks are XChaCha12 and the NH hash function. These algorithms are fast even on processors without dedicated crypto instructions. Adiantum makes it feasible to enable storage encryption on low-end mobile devices that lack AES instructions; currently such devices are unencrypted. On ARM Cortex-A7, on 4096-byte messages Adiantum encryption is about 4 times faster than AES-256-XTS encryption; decryption is about 5 times faster. In fscrypt, Adiantum is suitable for encrypting both file contents and names. With filenames, it fixes a known weakness: when two filenames in a directory share a common prefix of >= 16 bytes, with CTS-CBC their encrypted filenames share a common prefix too, leaking information. Adiantum does not have this problem. Since Adiantum also accepts long tweaks (IVs), it's also safe to use the master key directly for Adiantum encryption rather than deriving per-file keys, provided that the per-file nonce is included in the IVs and the master key isn't used for any other encryption mode. This configuration saves memory and improves performance. A new fscrypt policy flag is added to allow users to opt-in to this configuration. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2019-01-06  kconfig: rename generated .*conf-cfg to *conf-cfg  [Masahiro Yamada, 2 files, +19/-18]
Remove the dot-prefixing since it is just a matter of the .gitignore file. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove unnecessary stubs for archheader and archscripts  [Masahiro Yamada, 1 file, +1/-5]
Make simply skips a missing rule when it is marked as .PHONY. Remove the dummy targets. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: use assignment instead of define ... endef for filechk_* rules  [Masahiro Yamada, 6 files, +12/-26]
You do not have to use define ... endef for filechk_* rules. For simple cases, the use of assignment looks cleaner, IMHO. I updated the usage for scripts/Kbuild.include in case somebody misunderstands that 'define ... endef' is a requirement.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-01-06  arch: remove redundant UAPI generic-y defines  [Masahiro Yamada, 24 files, +0/-408]
Now that Kbuild automatically creates asm-generic wrappers for missing mandatory headers, it is redundant to list the same headers in generic-y and mandatory-y. Suggested-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Sam Ravnborg <sam@ravnborg.org>
2019-01-06  kbuild: generate asm-generic wrappers if mandatory headers are missing  [Masahiro Yamada, 3 files, +10/-10]
Some time ago, Sam pointed out a certain degree of overlap between generic-y and mandatory-y (https://lkml.org/lkml/2017/7/10/121). I tweaked the meaning of mandatory-y a little bit; now it defines the minimum set of ASM headers that all architectures must have. If an architecture does not have a specific implementation of a mandatory header, Kbuild will let it fall back to the asm-generic one by automatically generating a wrapper. This will allow us to drop lots of redundant generic-y defines. Previously, "mandatory" was used in the context of UAPI, but I guess this can be extended to kernel-space ASM headers.
Suggested-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
2019-01-06  arch: remove stale comments "UAPI Header export list"  [Masahiro Yamada, 24 files, +0/-25]
These comments are leftovers of commit fcc8487d477a ("uapi: export all headers under uapi directories"). Prior to that commit, exported headers must be explicitly added to header-y. Now, all headers under the uapi/ directories are exported. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  riscv: remove redundant kernel-space generic-y  [Masahiro Yamada, 1 file, +0/-25]
This commit removes redundant generic-y defines in arch/riscv/include/asm/Kbuild. [1] It is redundant to define the same generic-y in both arch/$(ARCH)/include/asm/Kbuild and arch/$(ARCH)/include/uapi/asm/Kbuild. Remove the following generic-y: errno.h fcntl.h ioctl.h ioctls.h ipcbuf.h mman.h msgbuf.h param.h poll.h posix_types.h resource.h sembuf.h setup.h shmbuf.h signal.h socket.h sockios.h stat.h statfs.h swab.h termbits.h termios.h types.h [2] It is redundant to define generic-y when arch-specific implementation exists in arch/$(ARCH)/include/asm/*.h Remove the following generic-y: cacheflush.h module.h Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: change filechk to surround the given command with { }  [Masahiro Yamada, 7 files, +14/-12]
filechk_* rules often consist of multiple 'echo' lines. They must be surrounded with { } or ( ) to work correctly. Otherwise, only the string from the last 'echo' would be written into the target. Let's take care of that in the 'filechk' in scripts/Kbuild.include to clean up filechk_* rules. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove redundant target cleaning on failure  [Masahiro Yamada, 9 files, +16/-25]
Since commit 9c2af1c7377a ("kbuild: add .DELETE_ON_ERROR special target"), the target file is automatically deleted on failure. The boilerplate code ... || { rm -f $@; false; } is unneeded. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: clean up rule_dtc_dt_yaml  [Masahiro Yamada, 1 file, +2/-2]
Commit 3a2429e1faf4 ("kbuild: change if_changed_rule for multi-line recipe") and commit 4f0e3a57d6eb ("kbuild: Add support for DT binding schema checks") came in via different sub-systems. This is a follow-up cleanup. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kbuild: remove UIMAGE_IN and UIMAGE_OUT  [Masahiro Yamada, 1 file, +2/-4]
The only/last user of UIMAGE_IN/OUT was removed by commit 4722a3e6b716 ("microblaze: fix multiple bugs in arch/microblaze/boot/Makefile"). The input and output should always be $< and $@. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  jump_label: move 'asm goto' support test to Kconfig  [Masahiro Yamada, 42 files, +65/-119]
Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label". The jump label is controlled by HAVE_JUMP_LABEL, which is defined like this:

    #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
    # define HAVE_JUMP_LABEL
    #endif

We can improve this by testing 'asm goto' support in Kconfig, then make JUMP_LABEL depend on CC_HAS_ASM_GOTO. Ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL will match to the real kernel capability.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
2019-01-06  kallsyms: lower alignment on ARM  [Mathias Krause, 1 file, +2/-2]
As mentioned in the info pages of gas, the '.align' pseudo-op's interpretation of the alignment value is architecture specific: it is taken either as a byte count or as a power of two. On ARM it is actually the latter, which leads to unnecessarily large alignments of 16 bytes for 32-bit builds or 256 bytes for 64-bit builds. Fix this by switching to '.balign' instead, which is consistent across all architectures.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  scripts: coccinelle: boolinit: drop warnings on named constants  [Julia Lawall, 1 file, +5/-0]
Coccinelle doesn't always have access to the values of named (#define) constants, and they may likely often be bound to true and false values anyway, resulting in false positives. So stop warning about them. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  scripts: coccinelle: check for redeclaration  [Julia Lawall, 1 file, +3/-0]
Avoid reporting on the use of an iterator index variable when the variable is redeclared. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  kconfig: remove unused "file" field of yylval union  [Masahiro Yamada, 1 file, +0/-1]
This has never been used. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  nds32: remove redundant kernel-space generic-y  [Masahiro Yamada, 1 file, +0/-10]
This commit removes redundant generic-y defines in arch/nds32/include/asm/Kbuild. [1] It is redundant to define the same generic-y in both arch/$(ARCH)/include/asm/Kbuild and arch/$(ARCH)/include/uapi/asm/Kbuild. Remove the following generic-y: bitsperlong.h bpf_perf_event.h errno.h fcntl.h ioctl.h ioctls.h mman.h shmbuf.h stat.h [2] It is redundant to define generic-y when arch-specific implementation exists in arch/$(ARCH)/include/asm/*.h Remove the following generic-y: ftrace.h Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2019-01-06  nios2: remove unneeded HAS_DMA define  [Masahiro Yamada, 1 file, +0/-3]
kernel/dma/Kconfig globally defines HAS_DMA as follows:

    config HAS_DMA
            bool
            depends on !NO_DMA
            default y

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2019-01-05  lib/genalloc.c: include vmalloc.h  [Olof Johansson, 1 file, +1/-0]
Fixes build break on most ARM/ARM64 defconfigs:

    lib/genalloc.c: In function 'gen_pool_add_virt':
    lib/genalloc.c:190:10: error: implicit declaration of function 'vzalloc_node'; did you mean 'kzalloc_node'?
    lib/genalloc.c:190:8: warning: assignment to 'struct gen_pool_chunk *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    lib/genalloc.c: In function 'gen_pool_destroy':
    lib/genalloc.c:254:3: error: implicit declaration of function 'vfree'; did you mean 'kfree'?

Fixes: 6862d2fc8185 ('lib/genalloc.c: use vzalloc_node() to allocate the bitmap')
Cc: Huang Shijie <sjhuang@iluvatar.ai>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexey Skidanov <alexey.skidanov@intel.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-05  dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING for remapped allocations  [Christoph Hellwig, 1 file, +7/-6]
We need to return a dma_addr_t even if we don't have a kernel mapping. Do so by consolidating the phys_to_dma call in a single place and jumping to it from all the branches that return successfully.
Fixes: bfd56cd60521 ("dma-mapping: support highmem in the generic remap allocator")
Reported-by: Liviu Dudau <liviu@dudau.co.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Liviu Dudau <liviu@dudau.co.uk>
2019-01-05  x86/amd_gart: fix unmapping of non-GART mappings  [Christoph Hellwig, 1 file, +9/-1]
In many cases we don't have to create a GART mapping at all, which also means there is nothing to unmap. Fix the range check that was incorrectly modified when removing the mapping_error method. Fixes: 9e8aa6b546 ("x86/amd_gart: remove the mapping_error dma_map_ops method") Reported-by: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Michal Kubecek <mkubecek@suse.cz>
2019-01-04  ia64: fix compile without swiotlb  [Christoph Hellwig, 2 files, +3/-1]
Some non-generic ia64 configs don't build swiotlb, and thus should not pull in the generic non-coherent DMA infrastructure. Fixes: 68c608345c ("swiotlb: remove dma_mark_clean") Reported-by: Tony Luck <tony.luck@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-04  x86: re-introduce non-generic memcpy_{to,from}io  [Linus Torvalds, 4 files, +51/-18]
This has been broken forever, and nobody ever really noticed because it's purely a performance issue.

Long long ago, in commit 6175ddf06b61 ("x86: Clean up mem*io functions") Brian Gerst simplified the memory copies to and from iomem, since on x86, the instructions to access iomem are exactly the same as the regular instructions.

That is technically true, and things worked, and nobody said anything. Besides, back then the regular memcpy was pretty simple and worked fine.

Nobody noticed except for David Laight, that is. David has been testing a TLP monitor he was writing for an FPGA, and has been occasionally complaining about how memcpy_toio() writes things one byte at a time. Which is completely unacceptable from a performance standpoint, even if it happens to technically work.

The reason it's writing one byte at a time is because while it's technically true that accesses to iomem are the same as accesses to regular memory on x86, the _granularity_ (and ordering) of accesses matter to iomem in ways that they don't matter to regular cached memory.

In particular, when ERMS is set, we default to using "rep movsb" for larger memory copies. That is indeed perfectly fine for real memory, since the whole point is that the CPU is going to do cacheline optimizations and executes the memory copy efficiently for cached memory. With iomem? Not so much. With iomem, "rep movsb" will indeed work, but it will copy things one byte at a time. Slowly and ponderously.

Now, originally, back in 2010 when commit 6175ddf06b61 was done, we didn't use ERMS, and this was much less noticeable.

Our normal memcpy() was simpler in other ways too. Because in fact, it's not just about using the string instructions. Our memcpy() these days does things like "read and write overlapping values" to handle the last bytes of the copy. Again, for normal memory, overlapping accesses isn't an issue. For iomem? It can be.

So this re-introduces the specialized memcpy_toio(), memcpy_fromio() and memset_io() functions. It doesn't particularly optimize them, but it tries to at least not be horrid, or do overlapping accesses. In fact, this uses the existing __inline_memcpy() function that we still had lying around that uses our very traditional "rep movsl" loop followed by movsw/movsb for the final bytes.

Somebody may decide to try to improve on it, but if we've gone almost a decade with only one person really ever noticing and complaining, maybe it's not worth worrying about further, once it's not _completely_ broken?

Reported-by: David Laight <David.Laight@aculab.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
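A hedged sketch of the kind of helper being reintroduced (illustrative only; the real x86 implementation lives in arch/x86/lib and is written differently): copy in aligned 4-byte chunks with ordinary, ordered accesses, never overlapping reads and writes and never relying on "rep movsb", then finish the tail one byte at a time.

    #include <linux/types.h>
    #include <linux/io.h>

    static void my_memcpy_fromio(void *to, const void __iomem *from, size_t n)
    {
            u32 *d = to;
            u8 *db;

            while (n >= 4) {                /* dword granular, strictly in order */
                    *d++ = readl(from);
                    from += 4;
                    n -= 4;
            }
            db = (u8 *)d;
            while (n--) {
                    *db++ = readb(from);    /* byte tail, never overlapping */
                    from++;
            }
    }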
2019-01-04  Use __put_user_goto in __put_user_size() and unsafe_put_user()  [Linus Torvalds, 1 file, +22/-31]
This actually enables the __put_user_goto() functionality in unsafe_put_user().

For an example of the effect of this, this is the code generated for the

    unsafe_put_user(signo, &infop->si_signo, Efault);

in the waitid() system call:

    movl %ecx,(%rbx) # signo, MEM[(struct __large_struct *)_2]

It's just one single store instruction, along with generating an exception table entry pointing to the Efault label case in case that instruction faults.

Before, we would generate this:

    xorl %edx, %edx
    movl %ecx,(%rbx) # signo, MEM[(struct __large_struct *)_3]
    testl %edx, %edx
    jne .L309

with the exception table generated for that 'mov' instruction causing us to jump to a stub that set %edx to -EFAULT and then jumped back to the 'testl' instruction.

So not only do we now get rid of the extra code in the normal sequence, we also avoid unnecessarily keeping that extra error register live across it all.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-04  x86 uaccess: Introduce __put_user_goto  [Linus Torvalds, 1 file, +17/-11]
This is finally the actual reason for the odd error handling in the "unsafe_get/put_user()" functions, introduced over three years ago. Using a "jump to error label" interface is somewhat odd, but very convenient as a programming interface, and more importantly, it fits very well with simply making the target be the exception handler address directly from the inline asm. The reason it took over three years to actually do this? We need "asm goto" support for it, which only became the default on x86 last year. It's now been a year that we've forced asm goto support (see commit e501ce957a78 "x86: Force asm-goto"), and so let's just do it here too. [ Side note: this commit was originally done back in 2016. The above commentary about timing is obviously about it only now getting merged into my real upstream tree - Linus ] Sadly, gcc still only supports "asm goto" with asms that do not have any outputs, so we are limited to only the put_user case for this. Maybe in several more years we can do the get_user case too. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
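A hedged sketch of the shape of such a macro (simplified to the 4-byte case; the real x86 macro parameterizes the operand size and uses the user-access flavor of the exception-table macro): the faulting store gets an extable entry whose fixup target is the C error label itself, passed through the fourth, "GotoLabels" section of asm goto.

    #include <linux/compiler.h>     /* asm_volatile_goto() */
    #include <asm/asm.h>            /* _ASM_EXTABLE() */

    #define my_put_user_goto(x, addr, err_label)                            \
            asm_volatile_goto("1:   movl %0, %1\n"                          \
                              _ASM_EXTABLE(1b, %l2)                         \
                              : /* asm goto allows no outputs */            \
                              : "r" (x), "m" (*(addr))                      \
                              : : err_label)

Used roughly as my_put_user_goto(signo, &infop->si_signo, Efault); which matches the waitid() example shown for the previous commit.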
2019-01-05  parisc: Remap hugepage-aligned pages in set_kernel_text_rw()  [Helge Deller, 1 file, +2/-2]
The alternative coding patch for parisc in kernel 4.20 broke booting machines with PA8500-PA8700 CPUs. The problem is that for such machines the parisc kernel automatically utilizes huge pages to access kernel text code, but the set_kernel_text_rw() function, which is used shortly before applying any alternative patches, didn't use correctly hugepage-aligned addresses to remap the kernel text read-writeable.
Fixes: 3847dab77421 ("parisc: Add alternative coding infrastructure")
Cc: <stable@vger.kernel.org> [4.20]
Signed-off-by: Helge Deller <deller@gmx.de>
2019-01-04  ARM: multi_v7_defconfig: enable CONFIG_UNIPHIER_MDMAC  [Masahiro Yamada, 1 file, +1/-0]
Enable the UniPhier MIO DMAC driver. This is used as the DMA engine for accelerating the SD/eMMC controller drivers. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Olof Johansson <olof@lixom.net>
2019-01-04  Add CREDITS entry for Shaohua Li  [Jens Axboe, 1 file, +6/-0]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-04  mm/page_io.c: fix polled swap page in  [Jens Axboe, 1 file, +3/-1]
swap_readpage() wants to do polling to bring in pages if asked to, but it doesn't mark the bio as being polled. Additionally, the looping around the blk_poll() check isn't correct: if we get a zero return, we should call io_schedule(); we can't just assume that the bio has completed. The regular bio->bi_private check should be used for that.
Link: http://lkml.kernel.org/r/e15243a8-2cdf-c32c-ecee-f289377c8ef9@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>