Commit message | Author | Date | Files changed | Lines (-/+)
---|---|---|---|---
data: reorganize and edit new queuing code | Jason A. Donenfeld | 2017-09-15 | 1 | -1/+1
  This involves many changes to Samuel's new system, in addition to some TODOs for things that are not yet ideal.
queues: entirely rework parallel system | Samuel Holland | 2017-09-15 | 1 | -1/+1
  This removes our dependency on padata.
  Signed-off-by: Samuel Holland <samuel@sholland.org>
global: use pointer to net_device | Jason A. Donenfeld | 2017-07-20 | 1 | -6/+5
  DaveM prefers it to be this way, per [1].
  [1] http://www.spinics.net/lists/netdev/msg443992.html
random: wait for random bytes when generating nonces and ephemerals | Jason A. Donenfeld | 2017-06-12 | 1 | -5/+0
  We can let userspace configure WireGuard interfaces before the RNG is fully initialized, since what we mostly care about is having good randomness for ephemerals and XChaCha nonces. By deferring the wait until the randomness is actually requested, we give a lot more opportunity for gathering entropy. This won't cover entropy for hash table secrets or cookie secrets (which rotate anyway), but those have far less catastrophic failure modes, so ensuring good randomness for elliptic curve points and nonces should be sufficient.
config: ensure the RNG is initialized before setting | Jason A. Donenfeld | 2017-06-08 | 1 | -0/+5
  It's possible that get_random_bytes() will return bad randomness if it hasn't been seeded. This patch makes configuration block until the RNG is properly initialized.
  Reference: http://www.openwall.com/lists/kernel-hardening/2017/06/02/2
config: add new line for style | Jason A. Donenfeld | 2017-05-31 | 1 | -0/+1
config: it's faster to memcpy than strncpy | Jason A. Donenfeld | 2017-05-31 | 1 | -2/+1
  IFNAMSIZ is 16, so this is two instructions on 64-bit.
config: do not error out when getting if no peers | Jason A. Donenfeld | 2017-05-31 | 1 | -0/+1
peer: use iterator macro instead of callback | Jason A. Donenfeld | 2017-05-30 | 1 | -14/+18
noise: precompute static-static ECDH operation | Jason A. Donenfeld | 2017-05-30 | 1 | -1/+4
noise: redesign preshared key mode | Jason A. Donenfeld | 2017-05-17 | 1 | -13/+19
routingtable: rewrite core functions | Jason A. Donenfeld | 2017-04-21 | 1 | -15/+2
  When removing by peer, prev needs to be set to *nptr in order to traverse that part of the trie. The other remove-by-IP function can simply be removed, as it's not in use. The root freeing function can use pre-order traversal instead of post-order. The pre-order traversal code in general is now a nice iterator macro. The common-bits function can use the fast fls instructions, and the match function can be rewritten to simply compare common bits. While we're at it, let's add tons of new tests, randomized checking against a dumb implementation, and graphviz output. And in general, it's nice to clean things up.
config: don't allow no-privatekey to mask preshared | Jason A. Donenfeld | 2017-04-21 | 1 | -1/+2
curve25519: protect against potential invalid point attacks | Jason A. Donenfeld | 2017-03-30 | 1 | -1/+1
config: do not allow peers with public keys the same as the interface | Jason A. Donenfeld | 2017-03-28 | 1 | -0/+20
uapi: add version magic | Jason A. Donenfeld | 2017-03-24 | 1 | -15/+25
config: satisfy sparse | Jason A. Donenfeld | 2017-03-19 | 1 | -1/+1
socket: enable setting of fwmark | Jason A. Donenfeld | 2017-02-13 | 1 | -0/+6
config: useless newline | Jason A. Donenfeld | 2017-01-12 | 1 | -2/+0
Update copyright | Jason A. Donenfeld | 2017-01-10 | 1 | -1/+1
uapi: use sockaddr union instead of sockaddr_storage | Jason A. Donenfeld | 2017-01-10 | 1 | -8/+5
uapi: use flag instead of C bitfield for portability | Jason A. Donenfeld | 2017-01-10 | 1 | -6/+6
cookies: use xchacha20poly1305 instead of chacha20poly1305 | Jason A. Donenfeld | 2016-12-23 | 1 | -4/+14
  This allows us to precompute the blake2s calls and save cycles, since hchacha is fast.
config: allow removing multiple peers at once | Jason A. Donenfeld | 2016-12-23 | 1 | -1/+2
config: cleanups | Jason A. Donenfeld | 2016-12-16 | 1 | -33/+19
peer: don't use sockaddr_storage to reduce memory usage | Jason A. Donenfeld | 2016-12-13 | 1 | -3/+10
global: move to consistent use of uN instead of uintN_t for kernel code | Jason A. Donenfeld | 2016-12-11 | 1 | -5/+5
headers: cleanup notices | Jason A. Donenfeld | 2016-11-21 | 1 | -1/+1
socket: keep track of src address in sending packets | Jason A. Donenfeld | 2016-11-15 | 1 | -4/+6
socket: use dst_cache instead of handrolled cache | Jason A. Donenfeld | 2016-11-04 | 1 | -1/+1
timers: take reference like a lookup table | Jason A. Donenfeld | 2016-11-03 | 1 | -8/+1
Rework headers and includes | Jason A. Donenfeld | 2016-09-29 | 1 | -1/+0
persistent-keepalive: change range to [1,65535] | Jason A. Donenfeld | 2016-08-08 | 1 | -7/+4
timers: upstream removed the slack concept | Jason A. Donenfeld | 2016-07-23 | 1 | -5/+2
  We no longer specify slack ourselves; instead, we need to add it directly in the main scheduling.
timers: apply slack to hotpath timers | Jason A. Donenfeld | 2016-07-10 | 1 | -2/+5
  For timers in the hotpath, we don't want them to be rescheduled so aggressively, and since they don't need to be that precise, we can set a decent amount of slack. The persistent keepalive timer is something of a special case: since its timeout isn't fixed like the others, we don't want to fire it more often than the kernel ordinarily would, so instead we make the slack a minimum.
persistent keepalive: use unsigned long to avoid multiplication in hotpath | Jason A. Donenfeld | 2016-07-10 | 1 | -2/+2
persistent keepalive: use authenticated keepalives | Jason A. Donenfeld | 2016-07-10 | 1 | -1/+1
persistent keepalive: start sending immediately (tag: experimental-0.0.20160708.1) | Jason A. Donenfeld | 2016-07-08 | 1 | -1/+4
  Rather than only start sending the persistent keepalive packets when the device first sends data, this changes it to send the packets immediately on `ip link set up`. This makes things generally seem more stateless, since the administrator does not have to manually ping the endpoint. Of course, if you have a lot of peers and all of them have persistent keepalive enabled, this could cause a lot of unwanted immediate traffic. On the other hand, if all of those peers are at some point going to be sending packets, this would happen anyway. I suppose the moral of the story is that persistent keepalive is a feature really just for clients behind NAT, not for servers, and it should be used sparingly, which is why we've set it off by default in the first place.
persistent keepalive: add kernel mechanism | Jason A. Donenfeld | 2016-07-08 | 1 | -0/+8
Initial commit | Jason A. Donenfeld | 2016-06-25 | 1 | -0/+314