path: root/src/receive.c
Commit message | Author | Date | Files | Lines
* allowedips: rename from routingtable | Jason A. Donenfeld | 2017-11-10 | 1 | -1/+1
    Makes it more clear that this is _not_ a routing table replacement.
* receive: hoist fpu outside of receive loop | Jason A. Donenfeld | 2017-11-10 | 1 | -3/+6
* global: use fewer BUG_ONs | Jason A. Donenfeld | 2017-10-31 | 1 | -3/+3
    Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* global: style nits | Jason A. Donenfeld | 2017-10-31 | 1 | -3/+7
* global: accept decent check_patch.pl suggestions | Jason A. Donenfeld | 2017-10-31 | 1 | -10/+12
* stats: more robust accounting | Jason A. Donenfeld | 2017-10-31 | 1 | -5/+7
* receive: improve control flow | Jason A. Donenfeld | 2017-10-17 | 1 | -4/+2
* receive: disable bh before using stats seq lock | Jason A. Donenfeld | 2017-10-11 | 1 | -0/+4
    Otherwise we might get a situation like this:

        CPU0                    CPU1
        ----                    ----
        lock(tstats lock);
                                local_irq_disable();
                                lock(queue lock);
                                lock(tstats lock);
        <Interrupt>
        lock(queue lock);

    CPU1 is waiting for CPU0 to release tstats lock. But CPU0, in the
    interrupt handler, is waiting for CPU1 to release queue lock. The
    solution is to disable interrupts on CPU0, so that this can't happen.
    Note that this only affects 32-bit, since u64_stats_update_begin nops
    out on native 64-bit platforms.
    Reported-by: René van Dorst <opensource@vdorst.com>
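
The general pattern behind this fix, as a minimal sketch rather than the actual receive.c code (the struct and function names below are invented for illustration): disable bottom halves before taking the u64_stats seqcount, so nothing else that wants the stats lock can interrupt the holder on the same CPU.

    #include <linux/types.h>
    #include <linux/bottom_half.h>
    #include <linux/u64_stats_sync.h>

    struct rx_stats {                               /* hypothetical counters */
            u64 packets, bytes;
            struct u64_stats_sync syncp;
    };

    static void rx_stats_add(struct rx_stats *stats, unsigned int len)
    {
            /* Keep softirqs off while the seqcount is held, so the
             * CPU0/CPU1 scenario above cannot occur on 32-bit. */
            local_bh_disable();
            u64_stats_update_begin(&stats->syncp);  /* nops out on 64-bit */
            stats->packets++;
            stats->bytes += len;
            u64_stats_update_end(&stats->syncp);
            local_bh_enable();
    }
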
* socket: gcc inlining makes this faster | Jason A. Donenfeld | 2017-10-06 | 1 | -10/+2
* receive: do not consider 0 jiffies as being set | Jason A. Donenfeld | 2017-10-06 | 1 | -4/+4
    This causes tests to fail if run within the first 5 minutes. We also
    move to jiffies 64, so that there's low chance of wrapping in case
    handshakes are spread far apart.
    Reported-by: René van Dorst <opensource@vdorst.com>
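
A minimal sketch of the idea, with invented names and an illustrative threshold (the real struct, field, and timeout are different): store the timestamp as 64-bit jiffies and treat 0 explicitly as "never set".

    #include <linux/jiffies.h>
    #include <linux/types.h>

    #define RECENT_WINDOW (30 * HZ)         /* illustrative threshold */

    struct handshake_times {                /* hypothetical struct */
            u64 last_initiated;             /* jiffies_64; 0 means "never" */
    };

    static void mark_initiated(struct handshake_times *t)
    {
            t->last_initiated = get_jiffies_64();
    }

    static bool initiated_recently(const struct handshake_times *t)
    {
            /* The explicit zero check keeps "unset" unambiguous, and
             * 64-bit jiffies make wraparound a non-issue even when
             * handshakes are spread far apart. */
            return t->last_initiated &&
                   get_jiffies_64() - t->last_initiated < RECENT_WINDOW;
    }
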
* queueing: move from ctx to cb | Jason A. Donenfeld | 2017-10-05 | 1 | -57/+51
* receive: do not store endpoint in ctx | Jason A. Donenfeld | 2017-10-05 | 1 | -5/+21
* queueing: use ptr_ring instead of linked lists | Jason A. Donenfeld | 2017-10-05 | 1 | -7/+17
* receive: we're not planning on turning that into a while loop now | Jason A. Donenfeld | 2017-10-05 | 1 | -6/+5
* receive: use local keypair, not ctx keypair in error path | Jason A. Donenfeld | 2017-10-03 | 1 | -1/+1
* global: add space around variable declarations | Jason A. Donenfeld | 2017-10-03 | 1 | -0/+5
* receive: simplify message type validation | Jason A. Donenfeld | 2017-10-03 | 1 | -19/+33
* receive: do not consider netfilter drop a real drop | Jason A. Donenfeld | 2017-10-02 | 1 | -5/+3
* receive: mark function static | Jason A. Donenfeld | 2017-09-26 | 1 | -1/+1
* queueing: rename cpumask function | Jason A. Donenfeld | 2017-09-19 | 1 | -1/+1
* queueing: no need to memzero struct | Jason A. Donenfeld | 2017-09-19 | 1 | -1/+2
* receive: use netif_receive_skb instead of netif_rx | Jason A. Donenfeld | 2017-09-19 | 1 | -1/+1
    netif_rx queues things up to a per-cpu backlog, whereas
    netif_receive_skb immediately delivers the packet to the underlying
    network device and mostly never fails. In the event that decrypting
    packets is actually happening faster than the networking subsystem
    can receive them -- like with 65k packets with UDPv6 in `make
    test-qemu` -- then this backlog fills up and we wind up dropping some
    packets. This is fine and not altogether terrible, but it does raise
    the question of why we bothered spending CPU cycles decrypting those
    packets if they were just going to be dropped anyway.

    So, moving from netif_rx to netif_receive_skb means that whatever
    time netif_receive_skb needs winds up slowing down the dequeuing of
    decryption packets, which in turn means the decryption receive queue
    fills up sooner, so that we drop packets before decryption, rather
    than after, thus saving precious CPU cycles.

    Potential downsides of this include not keeping the cache hot, or not
    inundating the network subsystem with as many packets per second as
    possible, but in preliminary benchmarks, no difference has yet been
    observed.
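
The gist of the change, as a minimal sketch with an illustrative function name, assuming the decryption path has already set up skb->protocol and the headers: hand the decrypted packet to the stack synchronously with netif_receive_skb() rather than queueing it onto the per-CPU backlog with netif_rx().

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static void deliver_decrypted(struct sk_buff *skb, struct net_device *dev)
    {
            skb->dev = dev;
            /* Processed inline in this context, so time spent in the
             * stack throttles how fast the decryption queue drains;
             * netif_rx() would instead park the skb on a backlog that
             * can overflow and drop packets we already paid to decrypt. */
            if (unlikely(netif_receive_skb(skb) == NET_RX_DROP))
                    dev->stats.rx_dropped++;        /* rarely happens */
    }
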
* queue: entirely rework parallel system | Jason A. Donenfeld | 2017-09-18 | 1 | -14/+155
    This removes our dependency on padata and moves to a different mode
    of multiprocessing that is more efficient. This began as Samuel
    Holland's GSoC project and was gradually reworked/redesigned/rebased
    into this present commit, which is a combination of his initial
    contribution and my subsequent rewriting and redesigning.
* send: no need to check for NULL since ref is valid | Jason A. Donenfeld | 2017-09-16 | 1 | -1/+1
* noise: infer initiator or not from handshake state | Jason A. Donenfeld | 2017-08-04 | 1 | -1/+1
    Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>
* timers: rename confusingly named functions and variables | Jason A. Donenfeld | 2017-08-04 | 1 | -1/+1
    Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>
* receive: move lastminute guard into timer event | Jason A. Donenfeld | 2017-08-04 | 1 | -3/+1
    Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>
* receive: pskb_trim already checks length | Jason A. Donenfeld | 2017-08-01 | 1 | -1/+1
* receive: single line if style | Jason A. Donenfeld | 2017-08-01 | 1 | -2/+1
* receive: cleanup variable usage | Jason A. Donenfeld | 2017-07-28 | 1 | -11/+7
* global: use pointer to net_device | Jason A. Donenfeld | 2017-07-20 | 1 | -15/+15
    DaveM prefers it to be this way per [1].
    [1] http://www.spinics.net/lists/netdev/msg443992.html
* receive: cleanup error handlers | Jason A. Donenfeld | 2017-06-29 | 1 | -21/+23
* receive: pull IP header into head | Jason A. Donenfeld | 2017-06-29 | 1 | -0/+4
* receive: fix off-by-one in packet length checking | Jason A. Donenfeld | 2017-06-29 | 1 | -1/+1
    This caused certain packets to be rejected that shouldn't be
    rejected, in the case of certain scatter-gather ethernet drivers
    doing GRO pulling right up to the UDP bounds but not beyond. This
    caused certain TCP connections to fail. Thanks very much to Reuben
    for providing access to the machine to debug this regression.
    Reported-by: Reuben Martin <reuben.m@gmail.com>
* global: cleanup IP header checking | Jason A. Donenfeld | 2017-06-26 | 1 | -50/+16
    This way is more correct and ensures we're within the skb head.
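
Only as a sketch of the kind of check involved, not the actual helper in the tree (the function name is invented, and skb->data is assumed to point at the start of the IP header): before trusting IP header fields, make sure the whole header really sits in the skb's linear head, pulling it there if necessary.

    #include <linux/ip.h>
    #include <linux/ipv6.h>
    #include <linux/skbuff.h>

    static bool ip_header_in_head(struct sk_buff *skb)
    {
            const struct iphdr *ip4;

            /* pskb_may_pull() makes at least this many bytes available in
             * the linear head, or fails if the packet is shorter. */
            if (!pskb_may_pull(skb, sizeof(struct iphdr)))
                    return false;
            ip4 = (const struct iphdr *)skb->data;
            if (ip4->version == 4)
                    return true;
            if (ip4->version == 6)
                    return pskb_may_pull(skb, sizeof(struct ipv6hdr));
            return false;
    }
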
* receive: extend rate limiting to 1 second after under load detection | Jason A. Donenfeld | 2017-06-24 | 1 | -0/+5
* receive: trim incoming packets to IP header length | Jason A. Donenfeld | 2017-06-01 | 1 | -0/+15
* timers: reset retry-attempt counter when not retrying | Jason A. Donenfeld | 2017-05-31 | 1 | -1/+1
* timers: the completion of a handshake also is on key confirmation | Jason A. Donenfeld | 2017-05-31 | 1 | -0/+1
* debug: print interface name in dmesg | Jason A. Donenfeld | 2017-05-31 | 1 | -23/+23
* handshake: process in parallel | Jason A. Donenfeld | 2017-05-30 | 1 | -9/+12
* receive: netif_rx consumes | Jason A. Donenfeld | 2017-04-09 | 1 | -1/+3
* data: cleanup parallel workqueue and use two max_active | Jason A. Donenfeld | 2017-04-08 | 1 | -2/+2
* data: simplify flow | Jason A. Donenfeld | 2017-04-04 | 1 | -7/+2
* locking: always use _bh | Jason A. Donenfeld | 2017-04-04 | 1 | -3/+3
    All locks are potentially taken from both user context and softirq,
    which means we need to take the _bh variant.
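
A minimal sketch of the rule (the lock name is made up): any lock shared between process context and softirq context needs the _bh variants, so a softirq cannot fire on the same CPU while the lock is held and then deadlock trying to take it again.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(queue_lock);     /* illustrative name */

    /* Called from process context; the softirq receive path takes the
     * same lock with spin_lock_bh() as well. */
    static void touch_shared_queue(void)
    {
            spin_lock_bh(&queue_lock);      /* also disables softirqs locally */
            /* ... manipulate data shared with the receive softirq ... */
            spin_unlock_bh(&queue_lock);
    }
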
* data: big refactoring | Jason A. Donenfeld | 2017-03-20 | 1 | -42/+49
* receive: last_rx use is discouraged and removed in recent kernels | Jason A. Donenfeld | 2017-02-27 | 1 | -1/+0
* Update copyright | Jason A. Donenfeld | 2017-01-10 | 1 | -1/+1
* peer: don't use sockaddr_storage to reduce memory usage | Jason A. Donenfeld | 2016-12-13 | 1 | -7/+7
* receive: simplify ip header checking logic | Jason A. Donenfeld | 2016-12-11 | 1 | -15/+2