path: root/src/receive.c
Commit message | Author | Date | Files | Lines (-/+)
...
* queueing: use ptr_ring instead of linked lists (Jason A. Donenfeld, 2017-10-05; 1 file, -7/+17)
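
    The ptr_ring mentioned above is the kernel's fixed-size ring buffer of
    pointers from <linux/ptr_ring.h>. Below is a minimal sketch of the
    produce/consume pattern that replaces a linked-list queue; the names and
    ring size are illustrative, not WireGuard's actual queueing code.

```c
#include <linux/ptr_ring.h>
#include <linux/skbuff.h>

static struct ptr_ring example_ring;

static int example_queue_init(void)
{
	/* 1024 slots, chosen arbitrarily for this sketch. */
	return ptr_ring_init(&example_ring, 1024, GFP_KERNEL);
}

static bool example_enqueue(struct sk_buff *skb)
{
	/* Unlike an unbounded list, a full ring fails fast, letting the
	 * producer drop early instead of queueing without limit. */
	return ptr_ring_produce_bh(&example_ring, skb) == 0;
}

static struct sk_buff *example_dequeue(void)
{
	/* Returns NULL when the ring is empty. */
	return ptr_ring_consume_bh(&example_ring);
}
```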
* receive: we're not planning on turning that into a while loop now (Jason A. Donenfeld, 2017-10-05; 1 file, -6/+5)
* receive: use local keypair, not ctx keypair in error path (Jason A. Donenfeld, 2017-10-03; 1 file, -1/+1)
* global: add space around variable declarations (Jason A. Donenfeld, 2017-10-03; 1 file, -0/+5)
* receive: simplify message type validation (Jason A. Donenfeld, 2017-10-03; 1 file, -19/+33)
* receive: do not consider netfilter drop a real drop (Jason A. Donenfeld, 2017-10-02; 1 file, -5/+3)
* receive: mark function static (Jason A. Donenfeld, 2017-09-26; 1 file, -1/+1)
* queueing: rename cpumask function (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+1)
* queueing: no need to memzero struct (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+2)
* receive: use netif_receive_skb instead of netif_rx (Jason A. Donenfeld, 2017-09-19; 1 file, -1/+1)

    netif_rx queues things up in a per-cpu backlog, whereas netif_receive_skb
    immediately delivers the packet up the network stack and mostly never
    fails. When packets are actually being decrypted faster than the
    networking subsystem can receive them -- like with 65k packets with UDPv6
    in `make test-qemu` -- this backlog fills up and we wind up dropping some
    packets. This is fine and not altogether terrible, but it does raise the
    question of why we bothered spending CPU cycles decrypting those packets
    if they were just going to be dropped anyway.

    So, moving from netif_rx to netif_receive_skb means that whatever time
    netif_receive_skb needs winds up slowing down the dequeuing of decrypted
    packets, which in turn means the decryption receive queue fills up sooner,
    so that we drop packets before decryption rather than after, thus saving
    precious CPU cycles.

    Potential downsides of this include not keeping the cache hot and not
    inundating the network subsystem with as many packets per second as
    possible, but in preliminary benchmarks no difference has yet been
    observed.
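
    As a hedged sketch (not the literal one-line diff), the change amounts to
    swapping the injection call in the decryption path's hand-off function;
    the function name here is illustrative:

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Illustrative hand-off of a freshly decrypted skb to the stack. Both
 * functions consume the skb. netif_rx() defers delivery to a per-cpu
 * backlog drained in softirq, which can silently overflow;
 * netif_receive_skb() runs protocol delivery synchronously, so its cost
 * slows this caller and backpressure builds before decryption instead. */
static void example_inject_decrypted(struct sk_buff *skb)
{
	if (netif_receive_skb(skb) == NET_RX_DROP)
		pr_debug("stack dropped the packet\n");
}
```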
* queue: entirely rework parallel system (Jason A. Donenfeld, 2017-09-18; 1 file, -14/+155)

    This removes our dependency on padata and moves to a different mode of
    multiprocessing that is more efficient. This began as Samuel Holland's
    GSoC project and was gradually reworked/redesigned/rebased into this
    present commit, which is a combination of his initial contribution and
    my subsequent rewriting and redesigning.

* send: no need to check for NULL since ref is valid (Jason A. Donenfeld, 2017-09-16; 1 file, -1/+1)
* noise: infer initiator or not from handshake state (Jason A. Donenfeld, 2017-08-04; 1 file, -1/+1)

    Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>

* timers: rename confusingly named functions and variables (Jason A. Donenfeld, 2017-08-04; 1 file, -1/+1)

    Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>

* receive: move lastminute guard into timer event (Jason A. Donenfeld, 2017-08-04; 1 file, -3/+1)

    Suggested-by: Mathias Hall-Andersen <mathias@hall-andersen.dk>
* receive: pskb_trim already checks length (Jason A. Donenfeld, 2017-08-01; 1 file, -1/+1)
* receive: single line if style (Jason A. Donenfeld, 2017-08-01; 1 file, -2/+1)
* receive: cleanup variable usage (Jason A. Donenfeld, 2017-07-28; 1 file, -11/+7)
* global: use pointer to net_device (Jason A. Donenfeld, 2017-07-20; 1 file, -15/+15)

    DaveM prefers it to be this way, per [1].

    [1] http://www.spinics.net/lists/netdev/msg443992.html
* receive: cleanup error handlers (Jason A. Donenfeld, 2017-06-29; 1 file, -21/+23)
* receive: pull IP header into head (Jason A. Donenfeld, 2017-06-29; 1 file, -0/+4)
* receive: fix off-by-one in packet length checking (Jason A. Donenfeld, 2017-06-29; 1 file, -1/+1)

    This caused certain packets to be rejected that shouldn't have been
    rejected, in the case of certain scatter-gather ethernet drivers doing
    GRO pulling right up to the UDP bounds but not beyond. This caused
    certain TCP connections to fail. Thanks very much to Reuben for
    providing access to the machine to debug this regression.

    Reported-by: Reuben Martin <reuben.m@gmail.com>
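
    The faulty comparison itself isn't shown in this log, but the class of
    bug is easy to illustrate. A hypothetical boundary check of this kind,
    where a strict inequality wrongly rejects GRO having pulled exactly up to
    the bound (illustrative only, not the actual receive.c check):

```c
#include <linux/types.h>

/* Hypothetical illustration of an off-by-one in a length check.
 * A payload ending exactly at the available bound is legitimate. */
static bool example_length_ok(size_t payload_len, size_t available_len)
{
	/* Buggy version rejected equality:
	 *     return payload_len < available_len;
	 */
	return payload_len <= available_len;
}
```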
* global: cleanup IP header checking (Jason A. Donenfeld, 2017-06-26; 1 file, -50/+16)

    This way is more correct and ensures we're within the skb head.
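
    The usual idiom for staying "within the skb head" is pskb_may_pull(),
    which guarantees the requested bytes are in the linear area before the
    header is dereferenced. A minimal sketch, not the actual receive.c code:

```c
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/skbuff.h>

/* Illustrative: confirm the full IP header is in the skb head before
 * reading it. pskb_may_pull() pulls bytes in from fragments if needed
 * and fails if the packet is simply too short. */
static bool example_ip_header_in_head(struct sk_buff *skb)
{
	if (skb->protocol == htons(ETH_P_IP))
		return pskb_may_pull(skb, sizeof(struct iphdr));
	if (skb->protocol == htons(ETH_P_IPV6))
		return pskb_may_pull(skb, sizeof(struct ipv6hdr));
	return false;
}
```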
* receive: extend rate limiting to 1 second after under-load detection (Jason A. Donenfeld, 2017-06-24; 1 file, -0/+5)
* receive: trim incoming packets to IP header length (Jason A. Donenfeld, 2017-06-01; 1 file, -0/+15)
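
    Trimming to the length the IP header declares is conventionally a
    pskb_trim() call. A sketch assuming an already-validated IPv4 header;
    the function name is illustrative, not the actual patch:

```c
#include <linux/ip.h>
#include <linux/skbuff.h>

/* Illustrative: drop trailing bytes (such as link-layer padding) beyond
 * what the IPv4 header says the packet contains. Assumes the header has
 * already been pulled into the linear area and validated. */
static int example_trim_to_ip_len(struct sk_buff *skb)
{
	return pskb_trim(skb, ntohs(ip_hdr(skb)->tot_len));
}
```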
* timers: reset retry-attempt counter when not retrying (Jason A. Donenfeld, 2017-05-31; 1 file, -1/+1)
* timers: the completion of a handshake also is on key confirmation (Jason A. Donenfeld, 2017-05-31; 1 file, -0/+1)
* debug: print interface name in dmesg (Jason A. Donenfeld, 2017-05-31; 1 file, -23/+23)
* handshake: process in parallel (Jason A. Donenfeld, 2017-05-30; 1 file, -9/+12)
* receive: netif_rx consumes (Jason A. Donenfeld, 2017-04-09; 1 file, -1/+3)
* data: cleanup parallel workqueue and use two max_active (Jason A. Donenfeld, 2017-04-08; 1 file, -2/+2)
* data: simplify flow (Jason A. Donenfeld, 2017-04-04; 1 file, -7/+2)
* locking: always use _bh (Jason A. Donenfeld, 2017-04-04; 1 file, -3/+3)

    All locks are potentially taken from both user context and softirq,
    which means we need to use the _bh variants.
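
    A minimal sketch of the hazard the _bh variants guard against, with
    illustrative names rather than WireGuard's actual locks:

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

/* Process-context path (e.g. a configuration change from userspace).
 * With plain spin_lock(), a softirq arriving on this CPU while the lock
 * is held could try to take it again and spin forever. spin_lock_bh()
 * disables bottom halves for the critical section, preventing that. */
static void example_process_context_path(void)
{
	spin_lock_bh(&example_lock);
	/* ... touch state shared with the receive softirq ... */
	spin_unlock_bh(&example_lock);
}
```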
* data: big refactoring (Jason A. Donenfeld, 2017-03-20; 1 file, -42/+49)
* receive: last_rx use is discouraged and removed in recent kernels (Jason A. Donenfeld, 2017-02-27; 1 file, -1/+0)
* Update copyright (Jason A. Donenfeld, 2017-01-10; 1 file, -1/+1)
* peer: don't use sockaddr_storage to reduce memory usage (Jason A. Donenfeld, 2016-12-13; 1 file, -7/+7)
* receive: simplify ip header checking logic (Jason A. Donenfeld, 2016-12-11; 1 file, -15/+2)
* headers: cleanup notices (Jason A. Donenfeld, 2016-11-21; 1 file, -1/+1)
* packets: consolidate constants (Jason A. Donenfeld, 2016-11-16; 1 file, -3/+3)
* various: nits from willy (Jason A. Donenfeld, 2016-11-15; 1 file, -1/+1)
* debug: cleanup skb printing (Jason A. Donenfeld, 2016-11-15; 1 file, -42/+25)
* socket: keep track of src address in sending packets (Jason A. Donenfeld, 2016-11-15; 1 file, -42/+36)
* send: simplify handshake initiation queueing and introduce lock (Jason A. Donenfeld, 2016-11-07; 1 file, -1/+1)
* debug: support dynamic debug on skb addr (Jason A. Donenfeld, 2016-11-06; 1 file, -4/+4)
* receive: always send confirmation, even if queue is empty (Jason A. Donenfeld, 2016-10-19; 1 file, -1/+5)
* timers: only have initiator rekey (Jason A. Donenfeld, 2016-10-19; 1 file, -3/+26)

    If it's time to rekey and the responder sends a message, the initiator
    will begin the rekeying when sending its response message. In the worst
    case, this response message will actually just be the keepalive. This
    generally works well, with the one edge case of the message arriving
    less than 10 seconds before key expiration, in which case the keepalive
    is not sufficient. In that case, we simply rehandshake immediately.
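
    A rough sketch of that send-path decision, with illustrative stand-in
    constants and a hypothetical helper; the real timer logic is more
    involved than this:

```c
#include <linux/types.h>

/* Illustrative stand-ins for the protocol's timing thresholds. */
enum {
	EXAMPLE_REKEY_AFTER_SECS = 120
};

/* Hypothetical: decide at send time whether to piggyback a rekey. */
static bool example_should_rekey_on_send(u64 key_age_secs, bool is_initiator)
{
	/* Only the initiator rekeys proactively; the responder waits. */
	return is_initiator && key_age_secs >= EXAMPLE_REKEY_AFTER_SECS;
}
```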
* timers: always delay handshakes for responder (Jason A. Donenfeld, 2016-10-19; 1 file, -0/+2)

    With the prior behavior, when sending a packet, we checked to see if it
    was about time to start a new handshake, and if we were past a certain
    time, we started it. For the responder, we made that time a bit further
    in the future than for the initiator, to prevent the thundering-herd
    problem of them both starting at the same time. However, this was flawed.

    If both parties stopped communicating after 2.2 minutes, and then one
    party decided to initiate a TCP connection before the 3-minute mark, the
    currently open session would be used. However, because it was after the
    2.2-minute mark, both peers would try to initiate a handshake upon
    sending their first packet. The errant flow was as follows:

    1. Peer A sends SYN.
    2. Peer A sees that his key is getting old and initiates new handshake.
    3. Peer B receives SYN and sends ACK.
    4. Peer B sees that his key is getting old and initiates new handshake.

    Since these events happened after the 2.2-minute mark, there's no delay
    between handshake initiations, and problems begin. The new behavior is
    changed to:

    1. Peer A sends SYN.
    2. Peer A sees that his key is getting old and initiates new handshake.
    3. Peer B receives SYN and sends ACK.
    4. Peer B sees that his key is getting old and schedules a delayed
       handshake for 12.5 seconds in the future.
    5. Peer B receives handshake initiation and cancels scheduled handshake.
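
    A sketch of the scheduling in steps 4 and 5, using a delayed work item.
    The 12.5-second constant comes from the description above; the function
    names are illustrative, not the actual timers code:

```c
#include <linux/jiffies.h>
#include <linux/workqueue.h>

static struct delayed_work example_handshake_work;

/* Step 4: the responder's key is getting old, but instead of initiating
 * immediately, it schedules the handshake 12.5 seconds out. */
static void example_schedule_responder_handshake(void)
{
	schedule_delayed_work(&example_handshake_work,
			      msecs_to_jiffies(12500));
}

/* Step 5: the initiator's handshake arrives first, so the responder's
 * scheduled one is no longer needed and gets cancelled. */
static void example_on_initiation_received(void)
{
	cancel_delayed_work(&example_handshake_work);
}
```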
* debug: keep alive -> keepalive (Jason A. Donenfeld, 2016-10-19; 1 file, -1/+1)
* Rework headers and includes (Jason A. Donenfeld, 2016-09-29; 1 file, -2/+3)