path: root/src/receive.c

Commit log (newest first) — each entry: subject (author, date; files changed, lines -/+)
* crypto: import zinc (Jason A. Donenfeld, 2018-09-03; 1 file, -1/+1)

* global: run through clang-format (Jason A. Donenfeld, 2018-08-28; 1 file, -79/+178)

      This is the worst commit in the whole repo, making the code much less
      readable, but so it goes with upstream maintainers. We are now
      woefully wrapped at 80 columns.

* crypto: move simd context to specific type (Jason A. Donenfeld, 2018-08-06; 1 file, -6/+6)

      Suggested-by: Andy Lutomirski <luto@kernel.org>

* peer: ensure destruction doesn't race (Jason A. Donenfeld, 2018-08-03; 1 file, -21/+20)

      Completely rework peer removal to ensure peers don't jump between
      contexts and create races.

* queueing: ensure strictly ordered loads and stores (Jason A. Donenfeld, 2018-08-02; 1 file, -1/+1)

      We don't want a consumer to read plaintext when it's supposed to be
      reading ciphertext, which means we need to synchronize across cores.

      Suggested-by: Jann Horn <jann@thejh.net>
* peer: simplify rcu reference counts (Jason A. Donenfeld, 2018-07-31; 1 file, -1/+1)

      Use RCU reference counts only when we must, and otherwise use a more
      reasonably named function.

      Reported-by: Jann Horn <jann@thejh.net>

* receive: check against proper return value type (Jason A. Donenfeld, 2018-07-24; 1 file, -1/+1)

* receive: use gro call instead of plain call (Jason A. Donenfeld, 2018-07-12; 1 file, -1/+1)

* receive: account for zero or negative budget (Jason A. Donenfeld, 2018-07-11; 1 file, -0/+3)

      Suggested-by: Thomas Gschwantner <tharre3@gmail.com>

* receive: use NAPI on the receive path (Jonathan Neuschäfer, 2018-07-08; 1 file, -6/+13)

      Suggested-by: Jason A. Donenfeld <Jason@zx2c4.com>
      [Jason: fixed up the flushing of the rx_queue in peer_remove]
      Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>

* receive: style (Jason A. Donenfeld, 2018-07-04; 1 file, -1/+1)

* global: use fast boottime instead of normal boottime (Jason A. Donenfeld, 2018-06-23; 1 file, -3/+3)

      Generally if we're inaccurate by a few nanoseconds, it doesn't matter.

* global: use ktime boottime instead of jiffies (Jason A. Donenfeld, 2018-06-23; 1 file, -6/+6)

      Since this is a network protocol, expirations need to be accounted
      for, even across system suspend. On real systems, this isn't a
      problem, since we're clearing all keys before suspend. But on
      Android, where we don't do that, this is something of a problem.
      So, we switch to using boottime instead of jiffies.
* receive: don't toggle bh (Jason A. Donenfeld, 2018-06-22; 1 file, -6/+0)

      This had a bad performance impact. We'll probably need to revisit
      this later, but for now, let's not introduce a regression.

      Reported-by: Lonnie Abelbeck <lonnie@abelbeck.com>

* receive: drop handshake packets if rng is not initialized (Jason A. Donenfeld, 2018-06-19; 1 file, -2/+2)

      Otherwise it's too easy to trigger cookie reply messages.

* simd: encapsulate fpu amortization into nice functions (Jason A. Donenfeld, 2018-06-17; 1 file, -9/+4)

* queueing: re-enable preemption periodically to lower latency (Jason A. Donenfeld, 2018-06-16; 1 file, -0/+12)

* queueing: remove useless spinlocks on sc (Jason A. Donenfeld, 2018-06-16; 1 file, -2/+0)

      Since these are the only consumers, there's no need for locking.

* global: year bump (Jason A. Donenfeld, 2018-01-03; 1 file, -1/+1)

* receive: treat packet checking as irrelevant for timers (Jason A. Donenfeld, 2018-01-03; 1 file, -6/+6)

      Receiving any type of authenticated data is a receive and a
      traversal. When it isn't a keepalive it's a data. That's our rule.
      Whether or not it's the correct type of data or has the right IP
      header shouldn't influence timer decisions.

* global: add SPDX tags to all files (Greg Kroah-Hartman, 2017-12-09; 1 file, -1/+4)

      It's good to have SPDX identifiers in all files as the Linux kernel
      developers are working to add these identifiers to all files. Update
      all files with the correct SPDX license identifier based on the
      license text of the project or based on the license in the file
      itself. The SPDX identifier is a legally binding shorthand, which
      can be used instead of the full boiler plate text.

      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Modified-by: Jason A. Donenfeld <Jason@zx2c4.com>
* allowedips: rename from routingtable (Jason A. Donenfeld, 2017-11-10; 1 file, -1/+1)

      Makes it more clear that this is _not_ a routing table replacement.
* receive: hoist fpu outside of receive loop (Jason A. Donenfeld, 2017-11-10; 1 file, -3/+6)

* global: use fewer BUG_ONs (Jason A. Donenfeld, 2017-10-31; 1 file, -3/+3)

      Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* global: style nits (Jason A. Donenfeld, 2017-10-31; 1 file, -3/+7)

* global: accept decent check_patch.pl suggestions (Jason A. Donenfeld, 2017-10-31; 1 file, -10/+12)

* stats: more robust accounting (Jason A. Donenfeld, 2017-10-31; 1 file, -5/+7)

* receive: improve control flow (Jason A. Donenfeld, 2017-10-17; 1 file, -4/+2)

* receive: disable bh before using stats seq lock (Jason A. Donenfeld, 2017-10-11; 1 file, -0/+4)

      Otherwise we might get a situation like this:

          CPU0                          CPU1
          ----                          ----
          lock(tstats lock);
                                        local_irq_disable();
                                        lock(queue lock);
                                        lock(tstats lock);
          <Interrupt>
          lock(queue lock);

      CPU1 is waiting for CPU0 to release tstats lock. But CPU0, in the
      interrupt handler, is waiting for CPU1 to release queue lock. The
      solution is to disable interrupts on CPU0, so that this can't
      happen. Note that this only affects 32-bit, since
      u64_stats_update_begin nops out on native 64-bit platforms.

      Reported-by: René van Dorst <opensource@vdorst.com>
* socket: gcc inlining makes this faster (Jason A. Donenfeld, 2017-10-06; 1 file, -10/+2)

* receive: do not consider 0 jiffies as being set (Jason A. Donenfeld, 2017-10-06; 1 file, -4/+4)

      This causes tests to fail if run within the first 5 minutes. We also
      move to jiffies 64, so that there's low chance of wrapping in case
      handshakes are spread far apart.

      Reported-by: René van Dorst <opensource@vdorst.com>
* queueing: move from ctx to cb (Jason A. Donenfeld, 2017-10-05; 1 file, -57/+51)

* receive: do not store endpoint in ctx (Jason A. Donenfeld, 2017-10-05; 1 file, -5/+21)

* queueing: use ptr_ring instead of linked lists (Jason A. Donenfeld, 2017-10-05; 1 file, -7/+17)

* receive: we're not planning on turning that into a while loop now (Jason A. Donenfeld, 2017-10-05; 1 file, -6/+5)

* receive: use local keypair, not ctx keypair in error path (Jason A. Donenfeld, 2017-10-03; 1 file, -1/+1)

* global: add space around variable declarations (Jason A. Donenfeld, 2017-10-03; 1 file, -0/+5)

* receive: simplify message type validation (Jason A. Donenfeld, 2017-10-03; 1 file, -19/+33)

* receive: do not consider netfilter drop a real drop (Jason A. Donenfeld, 2017-10-02; 1 file, -5/+3)

* receive: mark function static (Jason A. Donenfeld, 2017-09-26; 1 file, -1/+1)

* queueing: rename cpumask function (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+1)

* queueing: no need to memzero struct (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+2)

* receive: use netif_receive_skb instead of netif_rx (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+1)

      netif_rx queues things up to a per-cpu backlog, whereas
      netif_receive_skb immediately delivers the packet to the underlying
      network device and mostly never fails. In the event where decrypting
      packets is actually happening faster than the networking subsystem
      can receive them -- like with 65k packets with UDPv6 in
      `make test-qemu` -- then this backlog fills up and we wind up
      dropping some packets. This is fine and not altogether terrible, but
      it does raise the question of why we bothered spending CPU cycles
      decrypting those packets if they were just going to be dropped
      anyway. So, moving from netif_rx to netif_receive_skb means that
      whatever time netif_receive_skb needs winds up slowing down the
      dequeuing of decrypted packets, which in turn means the decryption
      receive queue fills up sooner, so that we drop packets before
      decryption, rather than after, thus saving precious CPU cycles.
      Potential downsides of this include not keeping the cache hot, or
      not inundating the network subsystem with as many packets per second
      as possible, but in preliminary benchmarks, no difference has yet
      been observed.
* queue: entirely rework parallel system (Jason A. Donenfeld, 2017-09-18; 1 file, -14/+155)

      This removes our dependency on padata and moves to a different mode
      of multiprocessing that is more efficient. This began as Samuel
      Holland's GSoC project and was gradually reworked/redesigned/rebased
      into this present commit, which is a combination of his initial
      contribution and my subsequent rewriting and redesigning.

* send: no need to check for NULL since ref is valid (Jason A. Donenfeld, 2017-09-16; 1 file, -1/+1)

* noise: infer initiator or not from handshake state (Jason A. Donenfeld, 2017-08-04; 1 file, -1/+1)

      Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>

* timers: rename confusingly named functions and variables (Jason A. Donenfeld, 2017-08-04; 1 file, -1/+1)

      Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>

* receive: move lastminute guard into timer event (Jason A. Donenfeld, 2017-08-04; 1 file, -3/+1)

      Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>
* receive: pskb_trim already checks length (Jason A. Donenfeld, 2017-08-01; 1 file, -1/+1)
* receive: single line if style (Jason A. Donenfeld, 2017-08-01; 1 file, -2/+1)