Age  Commit message  [branch]  Author  Files changed, lines (-/+)
2019-03-25  [DO NOT UPSTREAM] integration tree maintainer scripts  [jd/wireguard]  Jason A. Donenfeld  13 files, -0/+279
People have been asking me how I'm keeping track of the 00/XX cover letter and syncing changes between the out-of-tree module repo and this repo, and how I deal with so many rebases. So this commit shows the scripts to do it. It obviously shouldn't find its way upstream. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2019-03-25  net: WireGuard secure network tunnel  [wireguard]  Jason A. Donenfeld  36 files, -0/+7667
WireGuard is a layer 3 secure networking tunnel made specifically for the kernel, which aims to be much simpler and easier to audit than IPsec. Extensive documentation and description of the protocol and considerations, along with formal proofs of the cryptography, are available at: * https://www.wireguard.com/ * https://www.wireguard.com/papers/wireguard.pdf This commit implements WireGuard as a simple network device driver, accessible in the usual RTNL way used by virtual network drivers. It makes use of the udp_tunnel APIs, GRO, GSO, NAPI, and the usual set of networking subsystem APIs. It has a somewhat novel multicore queueing system designed for maximum throughput and minimal latency of encryption operations, but it is implemented modestly using workqueues and NAPI. Configuration is done via generic Netlink, and following a review from the Netlink maintainer a year ago, several high-profile userspace tools have already implemented the API. This commit also comes with several different tests, both in-kernel tests and out-of-kernel tests based on network namespaces, taking advantage of the fact that sockets used by WireGuard intentionally stay in the namespace in which the WireGuard interface was originally created, exactly like the semantics of userspace tun devices. See wireguard.com/netns/ for pictures and examples. The source code is fairly short, but rather than combining everything into a single file, WireGuard is developed as cleanly separable files, making auditing and comprehension easier. Things are laid out as follows: * noise.[ch], cookie.[ch], messages.h: These implement the bulk of the cryptographic aspects of the protocol, and are mostly data-only in nature, taking in buffers of bytes and spitting out buffers of bytes. They also handle reference counting for their various shared pieces of data, like keys and key lists. * ratelimiter.[ch]: Used as an integral part of cookie.[ch] for ratelimiting certain types of cryptographic operations in accordance with particular WireGuard semantics. * allowedips.[ch], peerlookup.[ch]: The main lookup structures of WireGuard, the former being trie-like with particular semantics, an integral part of the design of the protocol, and the latter just being nice helper functions around the various hashtables we use. * device.[ch]: Implementation of functions for the netdevice and for rtnl, responsible for maintaining the life of a given interface and wiring it up to the rest of WireGuard. * peer.[ch]: Each interface has a list of peers, with helper functions available here for creation, destruction, and reference counting. * socket.[ch]: Implementation of functions related to udp_socket and the general set of kernel socket APIs, for sending and receiving ciphertext UDP packets, and taking care of WireGuard-specific sticky socket routing semantics for the automatic roaming. * netlink.[ch]: Userspace API entry point for configuring WireGuard peers and devices. The API has been implemented by several userspace tools and network management utilities, and the WireGuard project distributes the basic wg(8) tool. * queueing.[ch]: Shared functions on the rx and tx paths for handling the various queues used in the multicore algorithms. * send.c: Handles encrypting outgoing packets in parallel on multiple cores, before sending them in order on a single core, via workqueues and ring buffers. Also handles sending handshake and cookie messages as part of the protocol, in parallel. 
* receive.c: Handles decrypting incoming packets in parallel on multiple cores, before passing them off in order to be ingested via the rest of the networking subsystem with GRO via the typical NAPI poll function. Also handles receiving handshake and cookie messages as part of the protocol, in parallel. * timers.[ch]: Uses the timer wheel to implement protocol-specific event timeouts, and gives a set of very simple event-driven entry point functions for callers. * main.c, version.h: Initialization and deinitialization of the module. * selftest/*.h: Runtime unit tests for some of the most security-sensitive functions. * tools/testing/selftests/wireguard/netns.sh: Aforementioned testing script using network namespaces. This commit aims to be as self-contained as possible, implementing WireGuard as a standalone module not needing much special handling or coordination from the network subsystem. I expect future optimizations to the network stack to positively improve WireGuard, and vice-versa, but for the time being, this exists as intentionally standalone. We introduce a menu option for CONFIG_WIREGUARD, as well as provide a verbose debug log and self-tests via CONFIG_WIREGUARD_DEBUG. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: David Miller <davem@davemloft.net> Cc: Greg KH <gregkh@linuxfoundation.org>
2019-03-22  security/keys: rewrite big_key crypto to use Zinc  [big_key_rewrite]  Jason A. Donenfeld  2 files, -206/+28
A while back, I noticed that the crypto and crypto API usage in big_keys were entirely broken in multiple ways, so I rewrote it. Now, I'm rewriting it again, but this time using Zinc's ChaCha20Poly1305 function. This makes the file considerably more simple; the diffstat alone should justify this commit. It also should be faster, since it no longer requires a mutex around the "aead api object" (nor allocations), allowing us to encrypt multiple items in parallel. We also benefit from being able to pass any type of pointer to Zinc, so we can get rid of the ridiculously complex custom page allocator that big_key really doesn't need. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: David Howells <dhowells@redhat.com> Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com
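As a rough sketch of what that simplification buys (not the literal patch; names and error handling are condensed), encrypting a big_key payload with Zinc's ChaCha20Poly1305 buffer interface needs little more than a freshly generated key and a direct call into the library:

    #include <zinc/chacha20poly1305.h>
    #include <linux/random.h>

    /* Sketch: seal a payload with a one-time random key; because the key is
     * never reused, a constant nonce of 0 is acceptable here. The ciphertext
     * buffer must have room for CHACHA20POLY1305_AUTHTAG_SIZE extra bytes. */
    static void big_key_seal_sketch(u8 *ct, const u8 *pt, size_t len,
                                    u8 key[CHACHA20POLY1305_KEY_SIZE])
    {
            get_random_bytes(key, CHACHA20POLY1305_KEY_SIZE);
            chacha20poly1305_encrypt(ct, pt, len, NULL, 0, 0, key);
    }

    /* Decryption reports tag-verification failure via its return value. */
    static bool big_key_open_sketch(u8 *pt, const u8 *ct, size_t ct_len,
                                    const u8 key[CHACHA20POLY1305_KEY_SIZE])
    {
            return chacha20poly1305_decrypt(pt, ct, ct_len, NULL, 0, 0, key);
    }

Because the library operates on plain pointers, the payload can live in whatever memory big_key already manages, which is what makes the custom page allocator unnecessary.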
2019-03-22  zinc: Curve25519 ARM implementation  Jason A. Donenfeld  4 files, -195/+200
This ports the SUPERCOP implementation for usage in kernel space. In addition to the usual header, macro, and style changes required for kernel space, it makes a few small changes to the code: - The stack alignment is relaxed to 16 bytes. - Superfluous mov statements have been removed. - ldr for constants has been replaced with movw. - ldreq has been replaced with moveq. - The str epilogue has been made more idiomatic. - SIMD registers are not pushed and popped at the beginning and end. - The prologue and epilogue have been made idiomatic. - A hole has been removed from the stack, saving 32 bytes. - We write-back the base register whenever possible for vld1.8. - Some multiplications have been reordered for better A7 performance. There are more opportunities for cleanup, since this code is from qhasm, which doesn't always do the most opportune thing. But even prior to extensive hand optimizations, this code delivers significant performance improvements (given in get_cycles() per call): ----------- ------------- | generic C | this commit | ------------ ----------- ------------- | Cortex-A7 | 49136 | 22395 | ------------ ----------- ------------- | Cortex-A17 | 17326 | 4983 | ------------ ----------- ------------- Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Russell King <linux@armlinux.org.uk> Cc: linux-arm-kernel@lists.infradead.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: import Bernstein and Schwabe's Curve25519 ARM implementation  Jason A. Donenfeld  1 file, -0/+2105
This comes from Dan Bernstein and Peter Schwabe's public domain NEON code, and is included here in raw form so that subsequent commits that fix these up for the kernel can see how it has changed. This code does have some entirely cosmetic formatting differences, adding indentation and so forth, so that when we actually port it for use in the kernel in the subsequent commit, it's obvious what's changed in the process. This code originates from SUPERCOP 20180818, available at <https://bench.cr.yp.to/supercop.html>. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Russell King <linux@armlinux.org.uk> Cc: linux-arm-kernel@lists.infradead.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: Curve25519 x86_64 implementation  Jason A. Donenfeld  3 files, -0/+2378
This implementation is the fastest available x86_64 implementation, and unlike Sandy2x, it doesn't require use of the floating point registers at all. Instead it makes use of BMI2 and ADX, available on recent microarchitectures. The implementation was written by Armando Faz-Hernández with contributions (upstream) from Samuel Neves and me, in addition to further changes in the kernel implementation from us. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Samuel Neves <sneves@dei.uc.pt> Cc: Armando Faz-Hernández <armfazh@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: x86@kernel.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: Curve25519 generic C implementations and selftest  Jason A. Donenfeld  7 files, -0/+3095
This contains two formally verified C implementations of the Curve25519 scalar multiplication function, one for 32-bit systems, and one for 64-bit systems whose compiler supports efficient 128-bit integer types. Not only are these implementations formally verified, but they are also the fastest available C implementations. They have been modified to be friendly to kernel space and to be generally less horrendous looking, but still an effort has been made to retain their formally verified characteristic, and so the C might look slightly unidiomatic. The 64-bit version comes from HACL*: https://github.com/project-everest/hacl-star The 32-bit version comes from Fiat: https://github.com/mit-plv/fiat-crypto Information: https://cr.yp.to/ecdh.html Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Karthikeyan Bhargavan <karthik.bhargavan@gmail.com> Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
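For orientation, a Zinc-style X25519 exchange built on this primitive looks roughly like the following; the helper names follow the headers proposed in this series and should be read as a sketch rather than a stable API reference:

    #include <zinc/curve25519.h>

    static bool x25519_exchange_sketch(u8 shared[CURVE25519_KEY_SIZE],
                                       const u8 their_public[CURVE25519_KEY_SIZE])
    {
            u8 secret[CURVE25519_KEY_SIZE], pub[CURVE25519_KEY_SIZE];

            curve25519_generate_secret(secret);            /* random, clamped scalar */
            if (!curve25519_generate_public(pub, secret))  /* pub = secret * basepoint */
                    return false;

            /* Returns false if the resulting shared point is all zero. */
            return curve25519(shared, secret, their_public);
    }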
2019-03-22  zinc: BLAKE2s x86_64 implementation  Jason A. Donenfeld  4 files, -0/+761
These implementations from Samuel Neves support AVX and AVX-512VL. Originally this used AVX-512F, but Skylake thermal throttling made AVX-512VL more attractive and possible to do with negligible difference. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Samuel Neves <sneves@dei.uc.pt> Co-developed-by: Samuel Neves <sneves@dei.uc.pt> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: x86@kernel.org Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: BLAKE2s generic C implementation and selftest  Jason A. Donenfeld  5 files, -0/+2447
The C implementation was originally based on Samuel Neves' public domain reference implementation but has since been heavily modified for the kernel. We're able to do compile-time optimizations by moving some scaffolding around the final function into the header file. Information: https://blake2.net/ Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Samuel Neves <sneves@dei.uc.pt> Co-developed-by: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: ChaCha20Poly1305 construction and selftest  Jason A. Donenfeld  5 files, -0/+9451
This is an implementation of the ChaCha20Poly1305 AEAD, with an easy API for encrypting either contiguous buffers or scatter gather lists (such as those created from skb_to_sgvec). Information: https://tools.ietf.org/html/rfc8439 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
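The buffer-oriented entry points of the construction have roughly the following shape (scatter-gather variants for use with skb_to_sgvec sit alongside them); this is an approximation of the proposed header, not a verbatim copy:

    /* Encryption writes src_len + CHACHA20POLY1305_AUTHTAG_SIZE bytes to dst;
     * ad/ad_len carry the additional authenticated data, nonce is a simple
     * 64-bit counter value. */
    void chacha20poly1305_encrypt(u8 *dst, const u8 *src, const size_t src_len,
                                  const u8 *ad, const size_t ad_len,
                                  const u64 nonce,
                                  const u8 key[CHACHA20POLY1305_KEY_SIZE]);

    /* Decryption returns false when the authentication tag does not verify. */
    bool chacha20poly1305_decrypt(u8 *dst, const u8 *src, const size_t src_len,
                                  const u8 *ad, const size_t ad_len,
                                  const u64 nonce,
                                  const u8 key[CHACHA20POLY1305_KEY_SIZE]);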
2019-03-22  zinc: Poly1305 MIPS64 and MIPS32r2 implementations  Jason A. Donenfeld  5 files, -1/+917
The MIPS64 accelerated implementation comes from Andy Polyakov's implementation and provides nice speedups on commodity Octeon hardware. While this is CRYPTOGAMS code, the originating code for this happens to be the same as OpenSSL's commit 9925a82146f2503b4dd11423d0eba649b43450c0 This MIPS32r2 implementation comes from René van Dorst and me and results in a nice speedup on the usual OpenWRT targets. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: René van Dorst <opensource@vdorst.com> Co-developed-by: René van Dorst <opensource@vdorst.com> Co-developed-by: Andy Polyakov <appro@openssl.org> Cc: Andy Polyakov <appro@openssl.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@linux-mips.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: Poly1305 ARM and ARM64 implementations  Jason A. Donenfeld  5 files, -1/+2395
These ARM implementations come from Andy Polyakov's implementations, and perform extremely well on ARMv4+, with optimized paths for NEON and non-NEON. The NEON code uses base 2^26, while the scalar code uses base 2^64 on 64-bit and base 2^32 on 32-bit. If we hit the unfortunate situation of using NEON and then having to go back to scalar -- because the user is silly and has called the update function from two separate contexts -- then we need to convert back to the original base before proceeding. It is possible to reason that the initial reduction below is sufficient given the implementation invariants. However, for an avoidance of doubt and because this is not performance critical, we do the full reduction anyway. This conversion is found in the glue code, and a proof of correctness may be easily obtained from Z3: <https://xn--4db.cc/ltPtHCKN/py>. While this is CRYPTOGAMS code, the originating code for this happens to be the same as OpenSSL's commit 9925a82146f2503b4dd11423d0eba649b43450c0 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Co-developed-by: Andy Polyakov <appro@openssl.org> Cc: Andy Polyakov <appro@openssl.org> Cc: Russell King <linux@armlinux.org.uk> Cc: linux-arm-kernel@lists.infradead.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: Poly1305 x86_64 implementation  Jason A. Donenfeld  4 files, -0/+4426
These x86_64 vectorized implementations are based on Andy Polyakov's implementation, and support AVX, AVX-2, and AVX512F. The AVX-512F implementation is disabled on Skylake, due to throttling. The AVX code uses base 2^26, while the scalar code uses base 2^64. If we hit the unfortunate situation of using AVX and then having to go back to scalar -- because the user is silly and has called the update function from two separate contexts -- then we need to convert back to the original base before proceeding. It is possible to reason that the initial reduction below is sufficient given the implementation invariants. However, for an avoidance of doubt and because this is not performance critical, we do the full reduction anyway. This conversion is found in the glue code, and a proof of correctness may be easily obtained from Z3: <https://xn--4db.cc/ltPtHCKN/py>. On the left is cycle counts on a Core i7 6700HQ using the AVX-2 codepath, comparing this implementation ("new") to the implementation in the current crypto api ("old"). On the right are benchmarks on a Xeon Gold 5120 using the AVX-512 codepath. The difference is so stark, because the current crypto api's implementation does not support AVX-512 at all. AVX-2 AVX-512 --------- ----------- size old new size old new ---- ---- ---- ---- ---- ---- 0 70 68 0 74 70 16 92 90 16 96 92 32 134 104 32 136 106 48 172 120 48 184 124 64 218 136 64 218 138 80 254 158 80 260 160 96 298 174 96 300 176 112 342 192 112 342 194 128 388 212 128 384 212 144 428 228 144 420 226 160 466 246 160 464 248 176 510 264 176 504 264 192 550 282 192 544 282 208 594 302 208 582 300 224 628 316 224 624 318 240 676 334 240 662 338 256 716 354 256 708 358 272 764 374 272 748 372 288 802 352 288 788 358 304 420 366 304 422 370 320 428 360 320 432 364 336 484 378 336 486 380 352 426 384 352 434 390 368 478 400 368 480 408 384 488 394 384 490 398 400 542 408 400 542 412 416 486 416 416 492 426 432 534 430 432 538 436 448 544 422 448 546 432 464 600 438 464 600 448 480 540 448 480 548 456 496 594 464 496 594 476 512 602 456 512 606 470 528 656 476 528 656 480 544 600 480 544 606 498 560 650 494 560 652 512 576 664 490 576 662 508 592 714 508 592 716 522 608 656 514 608 664 538 624 708 532 624 710 552 640 716 524 640 720 516 656 770 536 656 772 526 672 716 548 672 722 544 688 770 562 688 768 556 704 774 552 704 778 556 720 826 568 720 832 568 736 768 574 736 780 584 752 822 592 752 826 600 768 830 584 768 836 560 784 884 602 784 888 572 800 828 610 800 838 588 816 884 628 816 884 604 832 888 618 832 894 598 848 942 632 848 946 612 864 884 644 864 896 628 880 936 660 880 942 644 896 948 652 896 952 608 912 1000 664 912 1004 616 928 942 676 928 954 634 944 994 690 944 1000 646 960 1002 680 960 1008 646 976 1054 694 976 1062 658 992 1002 706 992 1012 674 1008 1052 720 1008 1058 690 While this is CRYPTOGAMS code, the originating code for this happens to be derived from OpenSSL's commit 4dfe4310c31c4483705991d9a798ce9be1ed1c68 Signed-off-by: Jason A. 
Donenfeld <Jason@zx2c4.com> Signed-off-by: Samuel Neves <sneves@dei.uc.pt> Co-developed-by: Samuel Neves <sneves@dei.uc.pt> Co-developed-by: Andy Polyakov <appro@openssl.org> Cc: Andy Polyakov <appro@openssl.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: x86@kernel.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: Poly1305 generic C implementations and selftest  Jason A. Donenfeld  7 files, -0/+1685
These two C implementations -- a 32x32 one and a 64x64 one, depending on the platform -- come from Andrew Moon's public domain poly1305-donna portable code, modified for usage in the kernel and for usage with accelerated primitives. Information: https://cr.yp.to/mac.html Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: ChaCha20 MIPS32r2 implementation  Jason A. Donenfeld  4 files, -0/+455
This MIPS32r2 implementation comes from René van Dorst and me and results in a nice speedup on the usual OpenWRT targets. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: René van Dorst <opensource@vdorst.com> Co-developed-by: René van Dorst <opensource@vdorst.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paul.burton@mips.com> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@linux-mips.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: ChaCha20 ARM and ARM64 implementations  Jason A. Donenfeld  6 files, -1/+2954
These ARM implementations come from Andy Polyakov's implementation, as well as an ultra-fast unrolled scalar version optimized for ARMv7 from Eric Biggers, which performs well on platforms with slow NEON like Cortex-A7 and A5. This commit delivers approximately the same or much better performance than the existing crypto API's code and has been measured to do as such on: - ARM1176JZF-S [ARMv6] - Cortex-A7 [ARMv7] - Cortex-A8 [ARMv7] - Cortex-A9 [ARMv7] - Cortex-A17 [ARMv7] - Cortex-A53 [ARMv8] - Cortex-A55 [ARMv8] - Cortex-A73 [ARMv8] - Cortex-A75 [ARMv8] Interestingly, Andy Polyakov's scalar code is slower than Eric Biggers', but is also significantly shorter. This has the advantage that it does not evict other code from L1 cache -- particularly on ARM11 chips -- and so in certain circumstances it can actually be faster. However, it wasn't found that this had an effect on any code existing in the kernel today. While this is CRYPTOGAMS code, the originating code for this happens to be the same as OpenSSL's commit 9925a82146f2503b4dd11423d0eba649b43450c0 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Co-authored-by: Andy Polyakov <appro@openssl.org> Co-authored-by: Eric Biggers <ebiggers@google.com> Cc: Andy Polyakov <appro@openssl.org> Cc: Russell King <linux@armlinux.org.uk> Cc: linux-arm-kernel@lists.infradead.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: ChaCha20 x86_64 implementation  Jason A. Donenfeld  4 files, -0/+4221
These x86_64 vectorized implementations are based on Andy Polyakov's implementations, and support SSSE3, AVX-2, AVX-512F, and AVX-512VL. The AVX-512F implementation is disabled on Skylake, due to throttling, and the VL ymm implementation is used instead on that platform; other AVX-512 microarchitectures use AVX-512F. On the left is cycle counts on a Core i7 6700HQ using the AVX-2 codepath, comparing this implementation ("new") to the implementation in the current crypto api ("old"). On the right are benchmarks on a Xeon Gold 5120 using the AVX-512 codepath. The difference is so stark, because the current crypto api's implementation does not support AVX-512 at all. AVX-2 AVX-512 --------- ----------- size old new size old new ---- ---- ---- ---- ---- ---- 0 62 52 0 64 54 16 414 376 16 386 372 32 410 400 32 388 396 48 414 422 48 388 420 64 362 356 64 366 350 80 714 666 80 708 666 96 714 700 96 708 692 112 712 718 112 706 736 128 692 646 128 692 648 144 1042 674 144 1036 682 160 1042 694 160 1036 708 176 1042 726 176 1036 730 192 1018 650 192 1016 658 208 1366 686 208 1360 684 224 1366 696 224 1362 708 240 1366 722 240 1360 732 256 640 656 256 644 500 272 988 1246 272 990 526 288 988 1276 288 988 556 304 992 1296 304 988 576 320 972 1222 320 972 500 336 1318 1256 336 1314 532 352 1318 1276 352 1316 558 368 1316 1294 368 1318 578 384 1294 1218 384 1308 506 400 1642 1258 400 1644 532 416 1642 1282 416 1644 556 432 1642 1302 432 1644 594 448 1628 1224 448 1624 508 464 1970 1258 464 1970 534 480 1970 1280 480 1970 556 496 1970 1300 496 1968 582 512 656 676 512 660 624 528 1010 1290 528 1016 682 544 1010 1306 544 1016 702 560 1010 1332 560 1018 728 576 986 1254 576 998 654 592 1340 1284 592 1344 680 608 1334 1310 608 1344 708 624 1340 1334 624 1344 730 640 1314 1254 640 1326 654 656 1664 1282 656 1670 686 672 1674 1306 672 1670 708 688 1662 1336 688 1670 732 704 1638 1250 704 1652 658 720 1992 1292 720 1998 682 736 1994 1308 736 1998 710 752 1988 1334 752 1996 734 768 1252 1254 768 1256 662 784 1596 1290 784 1606 688 800 1596 1314 800 1606 714 816 1596 1330 816 1606 736 832 1576 1256 832 1584 660 848 1922 1286 848 1948 688 864 1922 1314 864 1950 714 880 1926 1338 880 1948 736 896 1898 1258 896 1912 688 912 2248 1288 912 2258 718 928 2248 1320 928 2258 744 944 2248 1338 944 2256 768 960 2226 1268 960 2238 692 976 2574 1288 976 2584 718 992 2576 1312 992 2584 744 1008 2574 1340 1008 2584 770 While this is CRYPTOGAMS code, the originating code for this happens to be derived from OpenSSL's commit cded951378069a478391843f5f8653c1eb5128da Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Samuel Neves <sneves@dei.uc.pt> Co-developed-by: Samuel Neves <sneves@dei.uc.pt> Co-developed-by: Andy Polyakov <appro@openssl.org> Cc: Andy Polyakov <appro@openssl.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: x86@kernel.org Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
2019-03-22  zinc: ChaCha20 generic C implementation and selftest  Jason A. Donenfeld  6 files, -0/+3002
This implements the ChaCha20 permutation as a single C statement, by way of the comma operator, which the compiler is able to simplify terrifically. Information: https://cr.yp.to/chacha.html Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
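To make the comma-operator remark concrete, the rounds can be expressed along these lines (an illustrative reconstruction, not the literal file): each quarter round is itself an expression, so double rounds and the whole 20-round permutation compose into a single statement that the compiler is free to flatten and optimize.

    #include <linux/bitops.h>   /* rol32() */

    #define QUARTER_ROUND(x, a, b, c, d) ( \
            x[a] += x[b], \
            x[d] = rol32(x[d] ^ x[a], 16), \
            x[c] += x[d], \
            x[b] = rol32(x[b] ^ x[c], 12), \
            x[a] += x[b], \
            x[d] = rol32(x[d] ^ x[a], 8), \
            x[c] += x[d], \
            x[b] = rol32(x[b] ^ x[c], 7) \
    )

    #define DOUBLE_ROUND(x) ( \
            /* column round */ \
            QUARTER_ROUND(x, 0, 4,  8, 12), \
            QUARTER_ROUND(x, 1, 5,  9, 13), \
            QUARTER_ROUND(x, 2, 6, 10, 14), \
            QUARTER_ROUND(x, 3, 7, 11, 15), \
            /* diagonal round */ \
            QUARTER_ROUND(x, 0, 5, 10, 15), \
            QUARTER_ROUND(x, 1, 6, 11, 12), \
            QUARTER_ROUND(x, 2, 7,  8, 13), \
            QUARTER_ROUND(x, 3, 4,  9, 14) \
    )

    /* The permutation itself is then ten double rounds over u32 x[16]. */
    static void chacha20_permute_sketch(u32 x[16])
    {
            int i;

            for (i = 0; i < 10; ++i)
                    DOUBLE_ROUND(x);
    }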
2019-03-22  zinc: introduce minimal cryptography library  [zinc]  Jason A. Donenfeld  5 files, -0/+57
Zinc stands for "Zinc Is Neat Crypto" or "Zinc as IN Crypto". It's also short, easy to type, and plays nicely with the recent trend of naming crypto libraries after elements. The guiding principle is "don't overdo it". It's less of a library and more of a directory tree for organizing well-curated direct implementations of cryptography primitives. Zinc is a new cryptography API that is much more minimal and lower-level than the current one. It intends to complement the current crypto API and provide a basis upon which it might build, as the provider of software implementations of cryptographic primitives. It is motivated by three primary observations in crypto API design: * Highly composable "cipher modes" and related abstractions from the 90s did not turn out to be as terrific an idea as hoped, leading to a host of API misuse problems. * Most programmers are afraid of crypto code, and so prefer to integrate it into libraries in a highly abstracted manner, so as to shield themselves from implementation details. Cryptographers, on the other hand, prefer simple direct implementations, which they're able to verify for high assurance and optimize in accordance with their expertise. * Overly abstracted and flexible cryptography APIs lead to a host of dangerous problems and performance issues. The kernel is usually not in the business of coming up with new uses of crypto, but rather of implementing various constructions, which means it essentially needs a library of primitives, not a highly abstracted enterprise-ready pluggable system, with a few particular exceptions. This last observation has played out over and over again within the kernel: * The perennial move of actual primitives away from crypto/ and into lib/, so that users can actually call these functions directly with no overhead and without lots of allocations, function pointers, string specifier parsing, and general clunkiness. For example: sha256, chacha20, siphash, sha1, and so forth live in lib/ rather than in crypto/. Zinc intends to stop the cluttering of lib/ and introduce these direct primitives into their proper place, lib/zinc/. * An abundance of misuse bugs with the present crypto API that have been very unpleasant to clean up. * A hesitance to even use cryptography, because of the overhead and headaches involved in accessing the routines. Zinc goes in a rather different direction. Rather than providing a thoroughly designed and abstracted API, Zinc gives you simple functions, which implement some primitive, or some particular and specific construction of primitives. It is not dynamic in the least, though one could imagine implementing a complex dynamic dispatch mechanism (such as the current crypto API) on top of these basic functions. After all, dynamic dispatch is usually needed for applications with cipher agility, such as IPsec, dm-crypt, AF_ALG, and so forth, and the existing crypto API will continue to play that role. However, Zinc will provide a non-haphazard way of directly utilizing crypto routines in applications that have neither the need nor the desire for abstraction and dynamic dispatch. It also organizes the implementations in a simple, straightforward, and direct manner, making it enjoyable and intuitive to work on. Rather than moving optimized assembly implementations into arch/, it keeps them all together in lib/zinc/, making it simple and obvious to compare and contrast what's happening. This is, notably, exactly what the lib/raid6/ tree does, and that seems to work out rather well. 
It's also the pattern of most successful crypto libraries. The architecture-specific glue code is made a part of each translation unit, rather than being in a separate one, so that generic and architecture-optimized code are combined at compile-time, and incompatibility branches compiled out by the optimizer. All implementations have been extensively tested and fuzzed, and are selected for their quality, trustworthiness, and performance. Wherever possible and performant, formally verified implementations are used, such as those from HACL* [1] and Fiat-Crypto [2]. The routines also take special care to zero out secrets using memzero_explicit (and future work is planned to have gcc do this more reliably and performantly with compiler plugins). The performance of the selected implementations is state-of-the-art and unrivaled on a broad array of hardware, though of course we will continue to fine tune these to the hardware demands of kernel contributors. Each implementation also comes with extensive self-tests and crafted test vectors, pulled from various places such as Wycheproof [9]. Regularity of function signatures is important, so that users can easily "guess" the name of the function they want. However, individual primitives are oftentimes not trivially interchangeable, having been designed for different things and requiring different parameters and semantics, and so the function signatures they provide will directly reflect the realities of the primitives' usages, rather than hiding them behind (inevitably leaky) abstractions. Also, in contrast to the current crypto API, Zinc functions can work on stack buffers, and can be called with different keys, without requiring allocations or locking. SIMD is used automatically when available, though some routines may benefit either from having their SIMD disabled for particular invocations, or from having the SIMD initialization calls amortized over several invocations of the function, and so Zinc utilizes function signatures enabling that in conjunction with the recently introduced simd_context_t. More generally, Zinc provides function signatures that allow just what is required by the various callers. This isn't to say that users of the functions will be permitted to pollute the function semantics with weird particular needs, but we are trying very hard not to overdo it, and that means looking carefully at what's actually necessary, and doing just that, and not much more than that. Remember: practicality and cleanliness rather than over-zealous infrastructure. Zinc also provides an opening for the best implementers in academia to contribute their time and effort to the kernel, by being sufficiently simple and inviting. In discussing this commit with some of the best and brightest over the last few years, there are many who are eager to devote rare talent and energy to this effort. To summarize, Zinc will contain implementations of cryptographic primitives that are: * Software-based and synchronous. * Exposed through a simple API that operates over plain chunks of data and does not need significant cumbersome scaffolding (more below). * Extremely fast, but also, in order of priority: 1) formally verified, 2) well-known code that's received a lot of eyeballs and has seen significant real-world usage, 3) simple reviewable code that is either obviously correct or hard to screw up given test vectors, and that has been subjected to significant amounts of fuzzing and projects like Wycheproof. 
The APIs of each implementation are generally expected to take the following forms, with accepted variations for primitives that have non-standard input or output parameters: * For hash functions, the classic init/update/final dance is well-known and clear to implement. Different init functions can instantiate different operating parameters of flexible hash functions (such as blake2s_init and blake2s_init_key). * Authenticated encryption functions return a simple boolean indicating whether or not decryption succeeded. They take as inputs and outputs either a pointer to a buffer and a size, or they take as inputs and outputs scatter-gather lists (for use with the network subsystem's skb_to_sgvec, for example). * Functions that are commonly called in loops inside a long-running worker thread may grow to take a simd_context_t parameter, so that the FPU can be twiddled from outside of the function (see prior commit introducing simd_get/put/relax). It is expected that most functions that fit this scenario will be ones that take scatter-gather lists. In addition to the above implementation and API considerations, inclusion criteria for Zinc will be mostly the same as for other aspects of the kernel: is there a direct user of the primitive or construction's Zinc implementation that would immediately benefit from that kind of API? Certain primitives, like MD5 for example, might be separated off into a legacy/ header subdirectory, to make it clear to new users that if they're using it, it's for a very particular purpose. Zinc is also wary of adding overly newfangled and unvetted primitives that have no immediate uptake or scrutiny: for example, implementations of a new block cipher posted on eprint just yesterday. But beyond that, we recognize that cryptographic functions have many different uses and are required in a large variety of standards and circumstances, whose decisions are often made outside the scope of the kernel, and so Zinc will strive to accommodate the writing of clean and effective code and will not discriminate on the basis of fashionability. Following the merging of this, I expect the primitives that currently exist in lib/ to work their way into lib/zinc/, after intense scrutiny of each implementation, potentially replacing them with either formally-verified implementations, or better studied and faster state-of-the-art implementations. Also following the merging of this, I expect the old crypto API implementations to be ported over to use Zinc for their software-based implementations. As Zinc is simply library code, its config options are un-menued, with the exception of CONFIG_ZINC_SELFTEST and CONFIG_ZINC_DEBUG, which enable various selftests and debugging conditions. [1] https://github.com/project-everest/hacl-star [2] https://github.com/mit-plv/fiat-crypto [3] https://cr.yp.to/ecdh.html [4] https://cr.yp.to/chacha.html [5] https://cr.yp.to/snuffle/xsalsa-20081128.pdf [6] https://cr.yp.to/mac.html [7] https://blake2.net/ [8] https://tools.ietf.org/html/rfc8439 [9] https://github.com/google/wycheproof Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org
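As a compact illustration of the "init/update/final dance" named above, a BLAKE2s computation under this scheme would look approximately like the following (parameter lists are abbreviated and may differ slightly from the series):

    #include <zinc/blake2s.h>

    static void blake2s_example(u8 digest[BLAKE2S_HASH_SIZE],
                                const u8 *input, size_t input_len)
    {
            struct blake2s_state state;

            blake2s_init(&state, BLAKE2S_HASH_SIZE);   /* or blake2s_init_key() */
            blake2s_update(&state, input, input_len);
            blake2s_final(&state, digest, BLAKE2S_HASH_SIZE);
    }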
2019-03-22  asm: simd context helper API  [simd]  Jason A. Donenfeld  27 files, -8/+224
Sometimes it's useful to amortize calls to XSAVE/XRSTOR and the related FPU/SIMD functions over a number of calls, because FPU restoration is quite expensive. This adds a simple header for carrying out this pattern:

    simd_context_t simd_context;

    simd_get(&simd_context);
    while ((item = get_item_from_queue()) != NULL) {
            encrypt_item(item, &simd_context);
            simd_relax(&simd_context);
    }
    simd_put(&simd_context);

The relaxation step ensures that we don't trample over preemption, and the get/put API should be a familiar paradigm in the kernel. On the other end, code that actually wants to use SIMD instructions can accept this as a parameter and check it via:

    void encrypt_item(struct item *item, simd_context_t *simd_context)
    {
            if (item->len > LARGE_FOR_SIMD && simd_use(simd_context))
                    wild_simd_code(item);
            else
                    boring_scalar_code(item);
    }

The actual XSAVE happens during simd_use (and only on the first time), so that if the context is never actually used, no performance penalty is hit. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: Samuel Neves <sneves@dei.uc.pt> Cc: Andy Lutomirski <luto@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: linux-arch@vger.kernel.org
2019-03-21  Merge branch 'Refactor-flower-classifier-to-remove-dependency-on-rtnl-lock'  David S. Miller  1 file, -108/+325
Vlad Buslov says: ==================== Refactor flower classifier to remove dependency on rtnl lock Currently, all netlink protocol handlers for updating rules, actions and qdiscs are protected with a single global rtnl lock which removes any possibility for parallelism. This patch set is the third step in removing the rtnl lock dependency from the TC rules update path. Recently, a new rtnl registration flag, RTNL_FLAG_DOIT_UNLOCKED, was added. TC rule update handlers (RTM_NEWTFILTER, RTM_DELTFILTER, etc.) are already registered with this flag and only take rtnl lock when qdisc or classifier requires it. Classifiers can indicate that their ops callbacks don't require the caller to hold rtnl lock by setting the TCF_PROTO_OPS_DOIT_UNLOCKED flag. The goal of this change is to refactor the flower classifier to support unlocked execution and register it with the unlocked flag. This patch set implements the following changes to make the flower classifier concurrency-safe: - Implement reference counting for individual filters. Change fl_get to take a reference to the filter. Implement the tp->ops->put callback that was introduced in the cls API patch set to release the reference to the flower filter. - Use the tp->lock spinlock to protect internal classifier data structures from concurrent modification. - Handle concurrent tcf proto deletion by returning EAGAIN, which will cause cls API to retry and create a new proto instance or return an error to the user (depending on message type). - Handle concurrent insertion of a filter with the same priority and handle by returning EAGAIN, which will cause cls API to look up the filter again and process it according to the netlink message flags. - Extend the flower mask with reference counting and protect the masks list with the masks_lock spinlock. - Prevent concurrent mask insertion by inserting a temporary value into the masks hash table. This is necessary because mask initialization is a sleeping operation and cannot be done while holding tp->lock. Both chain level and classifier level conflicts are resolved by returning -EAGAIN to cls API, which results in a restart of the whole operation. This retry mechanism is a result of the fine-grained locking approach used in this and previous changes in the series and is necessary to allow concurrent updates on the same chain instance. An alternative approach would be to lock the whole chain while updating filters on any of child tp's, adding and removing classifier instances from the chain. However, since most CPU-intensive parts of filter update code are specifically in classifier code and its dependencies (extensions and hw offloads), such an approach would negate most of the gains introduced by this change and previous changes in the series when updating the same chain instance. The tcf hw offloads API is not changed by this patch set and still requires the caller to hold rtnl lock. The refactored flower classifier tracks rtnl lock state by means of the 'rtnl_held' flag provided by cls API and obtains the lock before calling hw offloads. A following patch set will lift this restriction and refactor the cls hw offloads API to support unlocked execution. With these changes the flower classifier is safely registered with the TCF_PROTO_OPS_DOIT_UNLOCKED flag in the last patch. Changes from V2 to V3: - Rebase on latest net-next Changes from V1 to V2: - Extend cover letter with explanation about retry mechanism. - Rebase on current net-next. - Patch 1: - Use rcu_dereference_raw() for tp->root dereference. - Update comment in fl_head_dereference(). - Patch 2: - Remove redundant check in fl_change error handling code. 
- Add empty line between error check and new handle assignment. - Patch 3: - Refactor loop in fl_get_next_filter() to improve readability. - Patch 4: - Refactor __fl_delete() to improve readability. - Patch 6: - Fix comment in fl_check_assign_mask(). - Patch 9: - Extend commit message. - Fix error code in comment. - Patch 11: - Fix fl_hw_replace_filter() to always release rtnl lock in error handlers. - Patch 12: - Don't take rtnl lock before calling __fl_destroy_filter() in workqueue context. - Extend commit message with explanation why flower still takes rtnl lock before calling hardware offloads API. Github: <https://github.com/vbuslov/linux/tree/unlocked-flower-cong3> ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: set unlocked flag for flower proto ops  Vlad Buslov  1 file, -2/+1
Set TCF_PROTO_OPS_DOIT_UNLOCKED for flower classifier to indicate that its ops callbacks don't require caller to hold rtnl lock. Don't take rtnl lock in fl_destroy_filter_work() that is executed on workqueue instead of being called by cls API and is not affected by setting TCF_PROTO_OPS_DOIT_UNLOCKED. Rtnl mutex is still manually taken by flower classifier before calling hardware offloads API that has not been updated for unlocked execution. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: track rtnl lock state  Vlad Buslov  1 file, -26/+56
Use 'rtnl_held' flag to track if caller holds rtnl lock. Propagate the flag to internal functions that need to know rtnl lock state. Take rtnl lock before calling tcf APIs that require it (hw offload, bind filter, etc.). Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: protect flower classifier state with spinlock  Vlad Buslov  1 file, -7/+32
struct tcf_proto was extended with spinlock to be used by classifiers instead of global rtnl lock. Use it to protect shared flower classifier data structures (handle_idr, mask hashtable and list) and fields of individual filters that can be accessed concurrently. This patch set uses tcf_proto->lock as per instance lock that protects all filters on tcf_proto. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: handle concurrent tcf proto deletion  Vlad Buslov  1 file, -0/+8
Without rtnl lock protection tcf proto can be deleted concurrently. Check tcf proto 'deleting' flag after taking tcf spinlock to verify that no concurrent deletion is in progress. Return EAGAIN error if concurrent deletion detected, which will cause caller to retry and possibly create new instance of tcf proto. Retry mechanism is a result of fine-grained locking approach used in this and previous changes in series and is necessary to allow concurrent updates on same chain instance. Alternative approach would be to lock the whole chain while updating filters on any of child tp's, adding and removing classifier instances from the chain. However, since most CPU-intensive parts of filter update code are specifically in classifier code and its dependencies (extensions and hw offloads), such approach would negate most of the gains introduced by this change and previous changes in the series when updating same chain instance. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
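Schematically, and with simplified names rather than the literal patch, the check described here amounts to the following:

    #include <net/sch_generic.h>

    /* Sketch: bail out with -EAGAIN when the proto is being torn down
     * concurrently; the cls API then retries against a fresh tcf_proto
     * instance or returns the error to the user, depending on message type. */
    static int fl_change_sketch(struct tcf_proto *tp)
    {
            spin_lock(&tp->lock);
            if (tp->deleting) {
                    spin_unlock(&tp->lock);
                    return -EAGAIN;
            }
            /* ... insert or overwrite the filter under tp->lock ... */
            spin_unlock(&tp->lock);
            return 0;
    }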
2019-03-21  net: sched: flower: handle concurrent filter insertion in fl_change  Vlad Buslov  1 file, -0/+9
Check if user specified a handle and another filter with the same handle was inserted concurrently. Return EAGAIN to retry filter processing (in case it is an overwrite request). Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: protect masks list with spinlock  Vlad Buslov  1 file, -0/+8
Protect modifications of flower masks list with spinlock to remove dependency on rtnl lock and allow concurrent access. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: handle concurrent mask insertion  Vlad Buslov  1 file, -7/+34
Without rtnl lock protection masks with same key can be inserted concurrently. Insert temporary mask with reference count zero to masks hashtable. This will cause any concurrent modifications to retry. Wait for rcu grace period to complete after removing temporary mask from masks hashtable to accommodate concurrent readers. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Suggested-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: add reference counter to flower mask  Vlad Buslov  1 file, -5/+17
Extend fl_flow_mask structure with reference counter to allow parallel modification without relying on rtnl lock. Use rcu read lock to safely lookup mask and increment reference counter in order to accommodate concurrent deletes. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: track filter deletion with flag  Vlad Buslov  1 file, -10/+29
In order to prevent double deletion of filter by concurrent tasks when rtnl lock is not used for synchronization, add 'deleted' filter field. Check value of this field when modifying filters and return error if concurrent deletion is detected. Refactor __fl_delete() to accept pointer to 'last' boolean as argument, and return error code as function return value instead. This is necessary to signal concurrent filter delete to caller. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: introduce reference counting for filters  Vlad Buslov  1 file, -14/+82
Extend flower filters with reference counting in order to remove dependency on rtnl lock in flower ops and allow to modify filters concurrently. Reference to flower filter can be taken/released concurrently as soon as it is marked as 'unlocked' by last patch in this series. Use atomic reference counter type to make concurrent modifications safe. Always take reference to flower filter while working with it: - Modify fl_get() to take reference to filter. - Implement tp->put() callback as fl_put() function to allow cls API to release reference taken by fl_get(). - Modify fl_change() to assume that caller holds reference to fold and take reference to fnew. - Take reference to filter while using it in fl_walk(). Implement helper functions to get/put filter reference counter. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
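In outline, and with slightly simplified names, the get/put pair described above can be pictured as follows; the destruction path shown is the pre-existing one:

    #include <linux/refcount.h>

    /* Sketch of per-filter reference counting: the filter is only freed once
     * every user, including the idr/list itself, has dropped its reference. */
    static void fl_filter_get_sketch(struct cls_fl_filter *f)
    {
            refcount_inc(&f->refcnt);
    }

    static void fl_filter_put_sketch(struct cls_fl_filter *f)
    {
            if (refcount_dec_and_test(&f->refcnt))
                    __fl_destroy_filter(f);
    }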
2019-03-21  net: sched: flower: refactor fl_change  Vlad Buslov  1 file, -39/+41
As a preparation for using classifier spinlock instead of relying on external rtnl lock, rearrange code in fl_change. The goal is to group the code which changes classifier state in single block in order to allow following commits in this set to protect it from parallel modification with tp->lock. Data structures that require tp->lock protection are mask hashtable and filters list, and classifier handle_idr. fl_hw_replace_filter() is a sleeping function and cannot be called while holding a spinlock. In order to execute all sequence of changes to shared classifier data structures atomically, call fl_hw_replace_filter() before modifying them. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: sched: flower: don't check for rtnl on head dereference  Vlad Buslov  1 file, -7/+17
Flower classifier only changes root pointer during init and destroy. Cls API implements reference counting for tcf_proto, so there is no danger of concurrent access to tp when it is being destroyed, even without protection provided by rtnl lock. Implement a new function, fl_head_dereference(), to dereference tp->root without checking for rtnl lock. Use it in all flower functions that obtain the head pointer instead of rtnl_dereference(). Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
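The helper itself is tiny; conceptually it is just the raw RCU dereference with the rtnl assertion dropped (a sketch, assuming the surrounding cls_flower definitions):

    /* tp->root only changes in init() and destroy(), and cls API refcounting
     * keeps tp alive for the duration of any ops callback, so no rtnl (or
     * other lock) assertion is needed for this dereference. */
    static struct cls_fl_head *fl_head_dereference(const struct tcf_proto *tp)
    {
            return rcu_dereference_raw(tp->root);
    }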
2019-03-21  nfp: remove defines for unused control bits  Jakub Kicinski  2 files, -5/+1
NFP driver ABI contains bits for L2 switching which were never implemented in initially envisioned form. Remove the defines, and open up the possibility of reclaiming the bits for other uses. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  Merge branch 'rhashtable-cleanups'  David S. Miller  3 files, -74/+41
NeilBrown says: ==================== Two clean-ups for rhashtable. These two patches make small improvements to rhashtable, but are otherwise unrelated. Thanks to Herbert, Miguel, and Paul for the review. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  rhashtable: rename rht_for_each*continue as *from.  NeilBrown  3 files, -25/+25
The pattern set by list.h is that for_each..continue() iterators start at the next entry after the given one, while for_each..from() iterators start at the given entry. The rht_for_each*continue() iterators are documented as though they start at the 'next' entry, but actually start at the given entry, and they are used expecting that behaviour. So fix the documentation and change the names to *from for consistency with list.h Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: David S. Miller <davem@davemloft.net>
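The list.h convention being followed is easiest to see with the corresponding list iterators (standard kernel macros, shown here only to illustrate the naming semantics):

    #include <linux/list.h>

    struct item {
            int value;
            struct list_head node;
    };

    static void resume_walk(struct item *pos, struct list_head *head)
    {
            /* '..._from' starts at 'pos' itself ... */
            list_for_each_entry_from(pos, head, node)
                    pos->value++;
    }

    static void continue_walk(struct item *pos, struct list_head *head)
    {
            /* ... while '..._continue' starts at the entry after 'pos'. */
            list_for_each_entry_continue(pos, head, node)
                    pos->value++;
    }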
2019-03-21  rhashtable: don't hold lock on first table throughout insertion.  NeilBrown  2 files, -49/+16
rhashtable_try_insert() currently holds a lock on the bucket in the first table, while also locking buckets in subsequent tables. This is unnecessary and looks like a hold-over from some earlier version of the implementation. As insert and remove always lock a bucket in each table in turn, and as insert only inserts in the final table, there cannot be any races that are not covered by simply locking a bucket in each table in turn. When an insert call reaches the last table it can be sure that there is no matching entry in any other table as it has searched them all, and insertion never happens anywhere but in the last table. The fact that code tests for the existence of future_tbl while holding a lock on the relevant bucket ensures that two threads inserting the same key will make compatible decisions about which is the "last" table. This simplifies the code and allows the ->rehash field to be discarded. We still need a way to ensure that a dead bucket_table is never re-linked by rhashtable_walk_stop(). This can be achieved by calling call_rcu() inside the locked region, and checking with rcu_head_after_call_rcu() in rhashtable_walk_stop() to see if the bucket table is empty and dead. Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Paul E. McKenney <paulmck@linux.ibm.com> Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  Merge branch 'net-phy-Move-Omega-PHY-entry-to-Cygnus-PHY-driver'  David S. Miller  5 files, -75/+225
Florian Fainelli says: ==================== net: phy: Move Omega PHY entry to Cygnus PHY driver In order to pave the way for adding some specific Omega PHY features that may not be desirable on other products covered by the bcm7xxx PHY driver, split the Omega PHY entry into the Cygnus PHY driver such that the PHY drivers are reflective of product lines/business units maintaining them within Broadcom. No functional changes intended. ==================== Acked-by: Arun Parameswaran <arun.parameswaran@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: phy: Move Omega PHY entry to Cygnus PHY driver  Florian Fainelli  3 files, -4/+149
Cygnus and Omega are part of the same business unit and product line, it makes sense to group PHY entries by products such that a platform can select only the drivers that it needs. Bring all the functionality that the BCM7XXX_28NM_GPHY() macro hides for us and remove the Omega PHY entry from bcm7xxx.c. As an added bonus, we now have a proper mdio_device_id entry to permit auto-loading. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Scott Branden <scott.branden@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: phy: Prepare for moving Omega out of bcm7xxx  Florian Fainelli  3 files, -71/+76
The Omega PHY entry was added to bcm7xxx.c out of convenience and this breaks the one driver per product line paradigm that was applied up until now. Since the AFE initialization is shared between Omega and BCM7xxx move the relevant functions to bcm-phy-lib.[ch]. No functional changes introduced. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Scott Branden <scott.branden@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: dst: remove gc leftovers  Julian Wiedmann  3 files, -29/+1
Get rid of some obsolete gc-related documentation and macros that were missed in commit 5b7c9a8ff828 ("net: remove dst gc related code"). CC: Wei Wang <weiwan@google.com> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Wei Wang <weiwan@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  Merge branch 'net-broadcom-Remove-print-of-base-address'  David S. Miller  3 files, -8/+9
Florian Fainelli says: ==================== net: broadcom: Remove print of base address Some broadcom MDIO/switch/Ethernet MAC drivers insist on printing the base register virtual address which has little value. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: systemport: Remove print of base address  Florian Fainelli  1 file, -3/+3
Since commit ad67b74d2469 ("printk: hash addresses printed with %p") pointers are being hashed when printed. Displaying the virtual memory at bootup time is not helpful, especially given we use a dev_info() which already displays the platform device's address. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: dsa: bcm_sf2: Remove print of base address  Florian Fainelli  1 file, -4/+5
Since commit ad67b74d2469 ("printk: hash addresses printed with %p") pointers are being hashed when printed. Displaying the virtual memory at bootup time is not helpful, we use a dev_info() print which already displays the platform device's address. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  net: phy: mdio-bcm-unimac: Remove print of base address  Florian Fainelli  1 file, -1/+1
Since commit ad67b74d2469 ("printk: hash addresses printed with %p") pointers are being hashed when printed. Displaying the virtual memory at bootup time is not helpful, especially given we use a dev_info() which already displays the platform device's address. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  ipv6: Remove fallback argument from ip6_hold_safe  David Ahern  1 file, -7/+6
net and null_fallback are redundant. Remove null_fallback in favor of !net check. Signed-off-by: David Ahern <dsahern@gmail.com> Acked-by: Wei Wang <weiwan@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  ipv4: Allow amount of dirty memory from fib resizing to be controllable  David Ahern  4 files, -6/+26
The fib_trie implementation calls synchronize_rcu when a certain number of pages are dirty from freed entries. The number of pages was determined experimentally in 2009 (commit c3059477fce2d). At the current setting, synchronize_rcu is called often -- 51 times in a second in one test with an average of an 8 msec delay adding a fib entry. The total impact is a significant slowdown when modifying the fib. This is seen in the output of 'time' - the difference between real time and sys+user. For example, using 720,022 single path routes and 'ip -batch'[1]: $ time ./ip -batch ipv4/routes-1-hops real 0m14.214s user 0m2.513s sys 0m6.783s So roughly 35% of the actual time to install the routes is from the ip command getting scheduled out, most notably due to synchronize_rcu (this is observed using 'perf sched timehist'). This patch makes the amount of dirty memory configurable, from 64k, where synchronize_rcu is called often (small, low-end systems that are memory sensitive), to 64M, where synchronize_rcu is called rarely during a large FIB change (for high-end systems with lots of memory). The default is 512kB which corresponds to the current setting of 128 pages with a 4kB page size. As an example, at 16MB the worst interval shows 4 calls to synchronize_rcu in a second blocking for up to 30 msec in a single instance, and a total of almost 100 msec across the 4 calls in the second. The trade-off is allowing FIB entries to consume more memory in a given time window but with much better fib insertion rates (~30% increase in prefixes/sec). With this patch and net.ipv4.fib_sync_mem set to 16MB, the same batch file runs in: $ time ./ip -batch ipv4/routes-1-hops real 0m9.692s user 0m2.491s sys 0m6.769s So the dead time is reduced to about 1/2 second or <5% of the real time. [1] 'ip' modified to not request ACK messages which improves route insertion times by about 20% Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  tun: Remove unused first parameter of tun_get_iff()  Kirill Tkhai  1 file, -4/+3
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-03-21  tun: Add ioctl() TUNGETDEVNETNS cmd to allow obtaining real net ns of tun device  Kirill Tkhai  2 files, -0/+9
In commit f2780d6d7475 "tun: Add ioctl() SIOCGSKNS cmd to allow obtaining net ns of tun device" it was missed that tun may change its net ns, while the net ns of the socket remains the same as it was created initially. SIOCGSKNS returns the net ns of the socket, so it is not suitable for obtaining the net ns of the device. We may have two tun devices with the same names in two net ns, and in this case it's not possible to determine which of them the fd refers to (TUNGETIFF will return the same name). This patch adds a new ioctl() cmd for obtaining the net ns of a device. Reported-by: Harald Albrecht <harald.albrecht@gmx.net> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
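From userspace, the new command is used like the existing netns-returning ioctls; a minimal sketch (assuming, as with SIOCGSKNS, that the ioctl returns an open fd referring to the device's current net namespace, and that the kernel headers in use define TUNGETDEVNETNS):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/if_tun.h>

    int main(void)
    {
            int tun = open("/dev/net/tun", O_RDWR);
            int nsfd;

            if (tun < 0)
                    return 1;
            /* ... TUNSETIFF to attach to the existing device elided ... */
            nsfd = ioctl(tun, TUNGETDEVNETNS);
            if (nsfd < 0) {
                    perror("TUNGETDEVNETNS");
            } else {
                    printf("net ns fd of the tun device: %d\n", nsfd);
                    close(nsfd);
            }
            close(tun);
            return 0;
    }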
2019-03-21  Merge branch 'ipv6-Change-addrconf_f6i_alloc-to-use-ip6_route_info_create'  David S. Miller  2 files, -31/+22
David Ahern says: ==================== ipv6: Change addrconf_f6i_alloc to use ip6_route_info_create addrconf_f6i_alloc is the last caller of fib6_info_alloc besides ip6_route_info_create. There really is no good reason for it do its own fib6_info initialization, so convert it to call ip6_route_info_create. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>