Commit log (newest first). Each entry lists the commit message, author, date, files changed, and lines removed/added.
* allowedips: rename from routingtable (Jason A. Donenfeld, 2017-11-10; 1 file, -1/+1)
  Makes it more clear that this is _not_ a routing table replacement.
* receive: hoist fpu outside of receive loop (Jason A. Donenfeld, 2017-11-10; 1 file, -3/+6)
* global: use fewer BUG_ONs (Jason A. Donenfeld, 2017-10-31; 1 file, -3/+3)
  Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* global: style nits (Jason A. Donenfeld, 2017-10-31; 1 file, -3/+7)
* global: accept decent check_patch.pl suggestions (Jason A. Donenfeld, 2017-10-31; 1 file, -10/+12)
* stats: more robust accounting (Jason A. Donenfeld, 2017-10-31; 1 file, -5/+7)
* receive: improve control flow (Jason A. Donenfeld, 2017-10-17; 1 file, -4/+2)
* receive: disable bh before using stats seq lock (Jason A. Donenfeld, 2017-10-11; 1 file, -0/+4)
  Otherwise we might get a situation like this:

      CPU0                          CPU1
      ----                          ----
      lock(tstats lock);
                                    local_irq_disable();
                                    lock(queue lock);
                                    lock(tstats lock);
      <Interrupt>
      lock(queue lock);

  CPU1 is waiting for CPU0 to release tstats lock. But CPU0, in the
  interrupt handler, is waiting for CPU1 to release queue lock. The
  solution is to disable interrupts on CPU0, so that this can't
  happen. Note that this only affects 32-bit, since
  u64_stats_update_begin nops out on native 64-bit platforms.
  Reported-by: René van Dorst <opensource@vdorst.com>
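The resulting pattern on the receive path can be sketched as follows. This is a minimal illustration with assumed names (`tstats`, `syncp`), not the verbatim WireGuard source:

```c
/* Sketch only: on 32-bit, u64_stats_update_begin() takes a real
 * seqlock, so the writer must not be interrupted by code that takes
 * the same lock; on native 64-bit it compiles away to nothing. */
struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats);

local_bh_disable();                /* close the window shown above */
u64_stats_update_begin(&tstats->syncp);
tstats->rx_bytes += skb->len;
tstats->rx_packets++;
u64_stats_update_end(&tstats->syncp);
local_bh_enable();
put_cpu_ptr(dev->tstats);
```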
* socket: gcc inlining makes this faster (Jason A. Donenfeld, 2017-10-06; 1 file, -10/+2)
* receive: do not consider 0 jiffies as being set (Jason A. Donenfeld, 2017-10-06; 1 file, -4/+4)
  This causes tests to fail if run within the first 5 minutes. We also
  move to jiffies 64, so that there's a low chance of wrapping in case
  handshakes are spread far apart.
  Reported-by: René van Dorst <opensource@vdorst.com>
* queueing: move from ctx to cb (Jason A. Donenfeld, 2017-10-05; 1 file, -57/+51)
* receive: do not store endpoint in ctx (Jason A. Donenfeld, 2017-10-05; 1 file, -5/+21)
* queueing: use ptr_ring instead of linked lists (Jason A. Donenfeld, 2017-10-05; 1 file, -7/+17)
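For context, the kernel's `ptr_ring` API (`linux/ptr_ring.h`) provides a fixed-size ring of pointers with cheap produce/consume operations. A hedged sketch of how a packet queue might use it; the function and variable names here are illustrative, not the actual WireGuard code:

```c
/* Illustrative only: a fixed-capacity queue of skbs built on
 * ptr_ring, avoiding per-packet list locking and allocation. */
static struct ptr_ring ring;

static int queue_init(void)
{
	return ptr_ring_init(&ring, 1024, GFP_KERNEL);
}

static void queue_packet(struct sk_buff *skb)
{
	if (ptr_ring_produce(&ring, skb))	/* nonzero: ring full */
		kfree_skb(skb);
}

static struct sk_buff *dequeue_packet(void)
{
	return ptr_ring_consume(&ring);		/* NULL when empty */
}
```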
* receive: we're not planning on turning that into a while loop now (Jason A. Donenfeld, 2017-10-05; 1 file, -6/+5)
* receive: use local keypair, not ctx keypair in error path (Jason A. Donenfeld, 2017-10-03; 1 file, -1/+1)
* global: add space around variable declarations (Jason A. Donenfeld, 2017-10-03; 1 file, -0/+5)
* receive: simplify message type validation (Jason A. Donenfeld, 2017-10-03; 1 file, -19/+33)
* receive: do not consider netfilter drop a real drop (Jason A. Donenfeld, 2017-10-02; 1 file, -5/+3)
* receive: mark function static (Jason A. Donenfeld, 2017-09-26; 1 file, -1/+1)
* queueing: rename cpumask function (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+1)
* queueing: no need to memzero struct (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+2)
* receive: use netif_receive_skb instead of netif_rx (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+1)
  netif_rx queues things up to a per-cpu backlog, whereas
  netif_receive_skb immediately delivers the packet to the underlying
  network device and mostly never fails. In the event that decrypting
  packets is actually happening faster than the networking subsystem
  can receive them -- like with 65k packets with UDPv6 in `make
  test-qemu` -- this backlog fills up and we wind up dropping some
  packets. This is fine and not altogether terrible, but it does raise
  the question of why we bothered spending CPU cycles decrypting those
  packets if they were just going to be dropped anyway. So, moving
  from netif_rx to netif_receive_skb means that whatever time
  netif_receive_skb needs winds up slowing down the dequeuing of
  decrypted packets, which in turn means the decryption receive queue
  fills up sooner, so that we drop packets before decryption rather
  than after, thus saving precious CPU cycles. Potential downsides
  include not keeping the cache hot or not inundating the network
  subsystem with as many packets per second as possible, but in
  preliminary benchmarks no difference has yet been observed.
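The change this reasoning justifies is essentially one call site. A hedged sketch; the function name `packet_consume_data_done` and the log message are assumed for illustration, not quoted from the source:

```c
static void packet_consume_data_done(struct sk_buff *skb)
{
	/* netif_rx() queues to a per-CPU backlog that can overflow,
	 * dropping packets we already spent cycles decrypting.
	 * netif_receive_skb() delivers synchronously, so under
	 * overload the decryption queue backs up instead, and drops
	 * happen before decryption rather than after. */
	if (unlikely(netif_receive_skb(skb) != NET_RX_SUCCESS))
		net_dbg_ratelimited("Failed to deliver decrypted packet\n");
}
```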
* queue: entirely rework parallel system (Jason A. Donenfeld, 2017-09-18; 1 file, -14/+155)
  This removes our dependency on padata and moves to a different mode
  of multiprocessing that is more efficient. This began as Samuel
  Holland's GSoC project and was gradually reworked/redesigned/rebased
  into this present commit, which is a combination of his initial
  contribution and my subsequent rewriting and redesigning.
* send: no need to check for NULL since ref is valid (Jason A. Donenfeld, 2017-09-16; 1 file, -1/+1)
* noise: infer initiator or not from handshake state (Jason A. Donenfeld, 2017-08-04; 1 file, -1/+1)
  Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>
* timers: rename confusingly named functions and variables (Jason A. Donenfeld, 2017-08-04; 1 file, -1/+1)
  Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>
* receive: move lastminute guard into timer event (Jason A. Donenfeld, 2017-08-04; 1 file, -3/+1)
  Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>
* receive: pskb_trim already checks length (Jason A. Donenfeld, 2017-08-01; 1 file, -1/+1)
* receive: single line if style (Jason A. Donenfeld, 2017-08-01; 1 file, -2/+1)
* receive: cleanup variable usage (Jason A. Donenfeld, 2017-07-28; 1 file, -11/+7)
* global: use pointer to net_device (Jason A. Donenfeld, 2017-07-20; 1 file, -15/+15)
  DaveM prefers it to be this way per [1].
  [1] http://www.spinics.net/lists/netdev/msg443992.html
* receive: cleanup error handlers (Jason A. Donenfeld, 2017-06-29; 1 file, -21/+23)
* receive: pull IP header into head (Jason A. Donenfeld, 2017-06-29; 1 file, -0/+4)
* receive: fix off-by-one in packet length checking (Jason A. Donenfeld, 2017-06-29; 1 file, -1/+1)
  This caused certain packets to be rejected that shouldn't have been,
  in the case of certain scatter-gather ethernet drivers doing GRO
  pulling right up to the UDP bounds but not beyond. This caused
  certain TCP connections to fail. Thanks very much to Reuben for
  providing access to the machine to debug this regression.
  Reported-by: Reuben Martin <reuben.m@gmail.com>
* global: cleanup IP header checking (Jason A. Donenfeld, 2017-06-26; 1 file, -50/+16)
  This way is more correct and ensures we're within the skb head.
* receive: extend rate limiting to 1 second after under load detection (Jason A. Donenfeld, 2017-06-24; 1 file, -0/+5)
* receive: trim incoming packets to IP header length (Jason A. Donenfeld, 2017-06-01; 1 file, -0/+15)
* timers: reset retry-attempt counter when not retrying (Jason A. Donenfeld, 2017-05-31; 1 file, -1/+1)
* timers: the completion of a handshake also is on key confirmation (Jason A. Donenfeld, 2017-05-31; 1 file, -0/+1)
* debug: print interface name in dmesg (Jason A. Donenfeld, 2017-05-31; 1 file, -23/+23)
* handshake: process in parallel (Jason A. Donenfeld, 2017-05-30; 1 file, -9/+12)
* receive: netif_rx consumes (Jason A. Donenfeld, 2017-04-09; 1 file, -1/+3)
* data: cleanup parallel workqueue and use two max_active (Jason A. Donenfeld, 2017-04-08; 1 file, -2/+2)
* data: simplify flow (Jason A. Donenfeld, 2017-04-04; 1 file, -7/+2)
* locking: always use _bh (Jason A. Donenfeld, 2017-04-04; 1 file, -3/+3)
  All locks are potentially between user context and softirq, which
  means we need to take the _bh variant.
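The rule the message states can be sketched as follows (illustrative kernel-style code, not the actual source): any lock a softirq can take must be acquired with the `_bh` variant in process context, otherwise the softirq can interrupt the lock holder on the same CPU and spin on the lock forever:

```c
/* Process context (e.g. configuration change): disable bottom
 * halves while holding a lock the rx softirq also takes. */
spin_lock_bh(&peer->lock);
/* ... mutate state that the receive softirq also touches ... */
spin_unlock_bh(&peer->lock);

/* Softirq context (packet receive) may use plain spin_lock(),
 * since softirqs do not preempt each other on the same CPU. */
```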
* data: big refactoring (Jason A. Donenfeld, 2017-03-20; 1 file, -42/+49)
* receive: last_rx use is discouraged and removed in recent kernels (Jason A. Donenfeld, 2017-02-27; 1 file, -1/+0)
* Update copyright (Jason A. Donenfeld, 2017-01-10; 1 file, -1/+1)
* peer: don't use sockaddr_storage to reduce memory usage (Jason A. Donenfeld, 2016-12-13; 1 file, -7/+7)
* receive: simplify ip header checking logic (Jason A. Donenfeld, 2016-12-11; 1 file, -15/+2)