A mailing list interlocutor argues that sharing the same macro name
might lead to errors down the road.
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Completely rework peer removal to ensure peers don't jump between
contexts and create races.
And in general it's good to prefer dereferencing entry.peer from a
handshake object rather than a keypair object, when possible, since
keypairs could disappear before their underlying peer.
We don't want a consumer to read plaintext when it's supposed to be
reading ciphertext, which means we need to synchronize across cores.
Suggested-by: Jann Horn <jann@thejh.net>
And in general tighten up the logic of peer creation.
After we atomic_set, the peer is allowed to be freed, which means if we
want to continue to reference it, we need to bump the reference count.
This was introduced a few commits ago by b713ab0e when implementing some
simplification suggestions.
This reduces the amount of call_rcu invocations considerably.
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Suggested-by: Jann Horn <jann@thejh.net>
If a peer is removed, it's possible for a lookup to momentarily return
NULL, resulting in needless -ENOKEY returns.
Signed-off-by: Jann Horn <jannh@google.com>
Blocks like:

    if (node_placement(*trie, key, cidr, bits, &node, lock)) {
        node->peer = peer;
        return 0;
    }

may result in a double read when adjusting the refcount, in the highly
unlikely case of LTO and an overly smart compiler.
While we're at it, replace rcu_assign_pointer(X, NULL); with
RCU_INIT_POINTER.
Reported-by: Jann Horn <jann@thejh.net>
Suggested-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Suggested-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
docs/protocol.md hasn't existed for 3 years.
Reported-by: Jann Horn <jann@thejh.net>
Use RCU reference counts only when we must, and otherwise use a more
reasonably named function.
Reported-by: Jann Horn <jann@thejh.net>
Fixes a classic ABA problem that isn't actually reachable because of
rtnl_lock, but it's good to be correct anyway.
Reported-by: Jann Horn <jann@thejh.net>
At this stage the value of C[4] is at most ((2^256-1) + 38*(2^256-1)) / 2^256 = 38,
so there is no need to use a wide multiplication.
Change inspired by Andy Polyakov's OpenSSL implementation.
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
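The carry bound quoted above can be confirmed with exact integer
arithmetic; here is a quick plain-Python sanity check (the variable
names are illustrative, not from the source):

```python
# Worst case from the commit message: the low 256-bit half (at most
# 2^256 - 1) plus 38 times the reduced high half (also at most 2^256 - 1).
# The carry out of 256 bits is the limb C[4].
M = 2**256 - 1          # largest 256-bit value
worst = M + 38 * M      # worst-case accumulator value
c4 = worst >> 256       # carry limb C[4]
assert c4 == 38         # matches the bound in the commit message
```

Since C[4] never exceeds 38, multiplying it by 38 again stays far below
2^64, so a plain 64-bit multiply suffices.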
Correctness can be quickly verified with the following z3py script:
>>> from z3 import *
>>> x = BitVec("x", 256) # any 256-bit value
>>> ref = URem(x, 2**255 - 19) # correct value
>>> t = Extract(255, 255, x); x &= 2**255 - 1; # btrq $63, %3
>>> u = If(t != 0, BitVecVal(38, 256), BitVecVal(19, 256)) # cmovncl %k5, %k4
>>> x += u # addq %4, %0; adcq $0, %1; adcq $0, %2; adcq $0, %3;
>>> t = Extract(255, 255, x); x &= 2**255 - 1; # btrq $63, %3
>>> u = If(t != 0, BitVecVal(0, 256), BitVecVal(19, 256)) # cmovncl %k5, %k4
>>> x -= u # subq %4, %0; sbbq $0, %1; sbbq $0, %2; sbbq $0, %3;
>>> prove(x == ref)
proved
Change inspired by Andy Polyakov's OpenSSL implementation.
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
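For readers without z3 on hand, the same two-step reduction can be
modelled in plain Python and spot-checked against the reference modulus;
the function name reduce256 is an illustrative label, not something from
the source:

```python
P = 2**255 - 19
MASK = 2**255 - 1

def reduce256(x):
    """Model of the branchless reduction shown in the z3 script above."""
    t = x >> 255            # btrq $63: extract and clear bit 255
    x &= MASK
    x += 38 if t else 19    # 2^255 == 19 (mod P), plus a speculative +19
    t = x >> 255            # second btrq $63
    x &= MASK
    x -= 0 if t else 19     # take the +19 back unless it carried out
    return x

# Spot checks across the boundary cases.
for v in (0, 1, 18, 19, P - 1, P, P + 1, 2**255, 2**255 + 37, 2**256 - 1):
    assert reduce256(v) == v % P
```

The speculative +19 folds any value in [P, 2^255) over the top bit, so
the second conditional step lands the result in [0, P) without branches.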
The wide multiplication by 38 in mul_a24_eltfp25519_1w is redundant:
(2^256-1) * 121666 / 2^256 is at most 121665, and therefore a 64-bit
multiplication can never overflow.
Change inspired by Andy Polyakov's OpenSSL implementation.
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
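The overflow claim here is likewise easy to confirm with exact
arithmetic (a plain-Python sketch, names chosen for illustration):

```python
# Multiplying a full 256-bit value by a24 = 121666 produces a carry limb
# of floor((2^256 - 1) * 121666 / 2^256), which is at most 121665.
A24 = 121666
carry = ((2**256 - 1) * A24) >> 256
assert carry == 121665          # the bound stated in the commit message
assert carry * 38 < 2**64       # reducing the carry fits in 64 bits
```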
Otherwise we incur undefined behavior.
This avoids adding one reference per peer to the napi_hash hashtable, as
normally done by netif_napi_add(). Since we potentially could have up to
2^20 peers this would make busy polling very slow globally.
This approach is preferable to having only a single napi struct because
we get one gro_list per peer, which means packets can be combined nicely
even if we have a large number of peers.
This is also done by gro_cells_init() in net/core/gro_cells.c.
Signed-off-by: Thomas Gschwantner <tharre3@gmail.com>
It's unclear why it was like this in the first place, but it apparently
broke certain IPv6 setups.
Reported-by: Jonas Blahut <j@die-blahuts.de>
Suggested-by: Thomas Gschwantner <tharre3@gmail.com>
Suggested-by: Jason A. Donenfeld <Jason@zx2c4.com>
[Jason: fixed up the flushing of the rx_queue in peer_remove]
Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
If KERNEL_VERSION ends in -debug, automatically set DEBUG_KERNEL.
If DEBUG_KERNEL is set, the debug kernel is now built in a separate
directory from the normal kernel, so that it's easy to toggle back
and forth.
This fixes DEBUG_KERNEL=yes due to
dd275caf4a0d9b219fffe49288b6cc33cd564312 being backported to 4.17.4.
This is needed for frankenkernels, like android-common.
Generally if we're inaccurate by a few nanoseconds, it doesn't matter.