path: root/device

Commit log, most recent first. Each entry shows the commit message, author, date, and files/lines changed.
* device: optimize message encoding
  Alexander Yastrebov, 2025-05-21 (2 files, -13/+53)

  Optimize message encoding by eliminating binary.Write (which internally
  uses reflection) in favour of hand-rolled encoding. This is a companion
  to 9e7529c3d2d0c54f4d5384c01645a9279e4740ae.

  Synthetic benchmark:

      var packetSink []byte

      func BenchmarkMessageInitiationMarshal(b *testing.B) {
          var msg MessageInitiation
          b.Run("binary.Write", func(b *testing.B) {
              b.ReportAllocs()
              for range b.N {
                  var buf [MessageInitiationSize]byte
                  writer := bytes.NewBuffer(buf[:0])
                  _ = binary.Write(writer, binary.LittleEndian, msg)
                  packetSink = writer.Bytes()
              }
          })
          b.Run("binary.Encode", func(b *testing.B) {
              b.ReportAllocs()
              for range b.N {
                  packet := make([]byte, MessageInitiationSize)
                  _, _ = binary.Encode(packet, binary.LittleEndian, msg)
                  packetSink = packet
              }
          })
          b.Run("marshal", func(b *testing.B) {
              b.ReportAllocs()
              for range b.N {
                  packet := make([]byte, MessageInitiationSize)
                  _ = msg.marshal(packet)
                  packetSink = packet
              }
          })
      }

  Results:

                                                 │      -      │
                                                 │   sec/op    │
      MessageInitiationMarshal/binary.Write-8      1.337µ ± 0%
      MessageInitiationMarshal/binary.Encode-8     1.242µ ± 0%
      MessageInitiationMarshal/marshal-8           53.05n ± 1%

                                                 │      -      │
                                                 │    B/op     │
      MessageInitiationMarshal/binary.Write-8      368.0 ± 0%
      MessageInitiationMarshal/binary.Encode-8     160.0 ± 0%
      MessageInitiationMarshal/marshal-8           160.0 ± 0%

                                                 │      -      │
                                                 │  allocs/op  │
      MessageInitiationMarshal/binary.Write-8      3.000 ± 0%
      MessageInitiationMarshal/binary.Encode-8     1.000 ± 0%
      MessageInitiationMarshal/marshal-8           1.000 ± 0%

  Signed-off-by: Alexander Yastrebov <yastrebov.alex@gmail.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
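  For context on the technique, a hand-rolled encoder of this shape writes
  each field at a fixed offset and avoids reflection entirely. The type and
  field names below are illustrative assumptions, not the actual
  MessageInitiation layout from this commit:

      package main

      import (
          "encoding/binary"
          "errors"
          "fmt"
      )

      // demoMessage is a hypothetical stand-in for a fixed-size wire message.
      type demoMessage struct {
          Type      uint32
          Sender    uint32
          Ephemeral [32]byte
      }

      const demoMessageSize = 4 + 4 + 32

      // marshal writes fields at fixed offsets, avoiding the reflection
      // that binary.Write performs internally.
      func (m *demoMessage) marshal(b []byte) error {
          if len(b) != demoMessageSize {
              return errors.New("wrong packet size")
          }
          binary.LittleEndian.PutUint32(b[0:4], m.Type)
          binary.LittleEndian.PutUint32(b[4:8], m.Sender)
          copy(b[8:40], m.Ephemeral[:])
          return nil
      }

      func main() {
          msg := demoMessage{Type: 1, Sender: 42}
          buf := make([]byte, demoMessageSize)
          if err := msg.marshal(buf); err != nil {
              panic(err)
          }
          fmt.Printf("%x\n", buf[:8])
      }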
* device: add support for removing allowedips individually
  Jason A. Donenfeld, 2025-05-20 (3 files, -34/+125)

  This pairs with the recent change in wireguard-tools.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: make unmarshall length checks exact
  Jason A. Donenfeld, 2025-05-15 (1 file, -7/+7)

  This is already enforced in receive.go, but if these unmarshallers are
  to have error return values anyway, make them as explicit as possible.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: reduce RoutineHandshake allocations
  Alexander Yastrebov, 2025-05-15 (2 files, -7/+51)

  Reduce allocations by eliminating the byte reader in favour of
  hand-rolled decoding, and by reusing message structs.

  Synthetic benchmark:

      var msgSink MessageInitiation

      func BenchmarkMessageInitiationUnmarshal(b *testing.B) {
          packet := make([]byte, MessageInitiationSize)
          reader := bytes.NewReader(packet)
          err := binary.Read(reader, binary.LittleEndian, &msgSink)
          if err != nil {
              b.Fatal(err)
          }
          b.Run("binary.Read", func(b *testing.B) {
              b.ReportAllocs()
              for range b.N {
                  reader := bytes.NewReader(packet)
                  _ = binary.Read(reader, binary.LittleEndian, &msgSink)
              }
          })
          b.Run("unmarshal", func(b *testing.B) {
              b.ReportAllocs()
              for range b.N {
                  _ = msgSink.unmarshal(packet)
              }
          })
      }

  Results:

                                                  │      -      │
                                                  │   sec/op    │
      MessageInitiationUnmarshal/binary.Read-8      1.508µ ± 2%
      MessageInitiationUnmarshal/unmarshal-8        12.66n ± 2%

                                                  │      -      │
                                                  │    B/op     │
      MessageInitiationUnmarshal/binary.Read-8      208.0 ± 0%
      MessageInitiationUnmarshal/unmarshal-8        0.000 ± 0%

                                                  │      -      │
                                                  │  allocs/op  │
      MessageInitiationUnmarshal/binary.Read-8      2.000 ± 0%
      MessageInitiationUnmarshal/unmarshal-8        0.000 ± 0%

  Signed-off-by: Alexander Yastrebov <yastrebov.alex@gmail.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
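  As with the encoding commit above, the decoder reads fixed offsets
  directly from the packet slice, so nothing is allocated per message.
  A minimal sketch with assumed (not actual) field names:

      package main

      import (
          "encoding/binary"
          "errors"
          "fmt"
      )

      type demoMessage struct {
          Type      uint32
          Sender    uint32
          Ephemeral [32]byte
      }

      const demoMessageSize = 4 + 4 + 32

      // unmarshal decodes without a bytes.Reader and without reflection,
      // so it performs zero allocations.
      func (m *demoMessage) unmarshal(b []byte) error {
          if len(b) != demoMessageSize {
              return errors.New("wrong packet size")
          }
          m.Type = binary.LittleEndian.Uint32(b[0:4])
          m.Sender = binary.LittleEndian.Uint32(b[4:8])
          copy(m.Ephemeral[:], b[8:40])
          return nil
      }

      func main() {
          buf := make([]byte, demoMessageSize)
          binary.LittleEndian.PutUint32(buf[0:4], 1)
          var msg demoMessage
          if err := msg.unmarshal(buf); err != nil {
              panic(err)
          }
          fmt.Println(msg.Type)
      }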
* device: use rand.NewSource instead of rand.Seed
  Tom Holford, 2025-05-05 (2 files, -10/+10)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
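  The change is roughly of this shape: rand.Seed has been deprecated since
  Go 1.20 in favour of a private source passed to rand.New. This is a
  generic sketch, not the commit's exact code:

      package main

      import (
          "fmt"
          "math/rand"
          "time"
      )

      func main() {
          // Before (deprecated): rand.Seed(time.Now().UnixNano())
          r := rand.New(rand.NewSource(time.Now().UnixNano()))
          fmt.Println(r.Intn(100))
      }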
* global: replaced unused function params with _
  Tom Holford, 2025-05-05 (3 files, -4/+4)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* global: bump copyright notice
  Jason A. Donenfeld, 2025-05-05 (36 files, -36/+36)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: fix missed return of QueueOutboundElementsContainer to its WaitPool
  Jordan Whited, 2025-05-04 (1 file, -0/+1)

  Fixes: 3bb8fec ("conn, device, tun: implement vectorized I/O plumbing")
  Reviewed-by: Brad Fitzpatrick <bradfitz@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: fix WaitPool sync.Cond usage
  Jordan Whited, 2025-05-04 (2 files, -6/+9)

  The sync.Locker used with a sync.Cond must be acquired when changing
  the associated condition, otherwise there is a window within
  sync.Cond.Wait() where a wake-up may be missed.

  Fixes: 4846070 ("device: use a waiting sync.Pool instead of a channel")
  Reviewed-by: Brad Fitzpatrick <bradfitz@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
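  A minimal sketch of the rule this fixes, using assumed names rather than
  the actual WaitPool code: the Cond's Locker must be held while the data
  behind the condition changes, or a concurrent Wait() can miss the
  Signal():

      package main

      import (
          "fmt"
          "sync"
      )

      type counter struct {
          mu    sync.Mutex
          cond  *sync.Cond
          count int
      }

      func newCounter() *counter {
          c := &counter{}
          c.cond = sync.NewCond(&c.mu)
          return c
      }

      func (c *counter) inc() {
          c.mu.Lock() // hold the lock while changing the condition
          c.count++
          c.mu.Unlock()
          c.cond.Signal()
      }

      func (c *counter) waitNonZero() int {
          c.mu.Lock()
          defer c.mu.Unlock()
          for c.count == 0 {
              c.cond.Wait()
          }
          c.count--
          return c.count
      }

      func main() {
          c := newCounter()
          go c.inc()
          fmt.Println(c.waitNonZero())
      }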
* device: fix possible deadlock in close method
  Martin Basovnik, 2023-12-11 (1 file, -2/+2)

  There is a possible deadlock in `device.Close()` when you try to close
  the device very soon after its start. The problem is that two different
  methods acquire the same locks in different order:

  1. device.Close()
     - device.ipcMutex.Lock()
     - device.state.Lock()

  2. device.changeState(deviceState)
     - device.state.Lock()
     - device.ipcMutex.Lock()

  Reproducer:

      func TestDevice_deadlock(t *testing.T) {
          d := randDevice(t)
          d.Close()
      }

  Problem:

      $ go clean -testcache && go test -race -timeout 3s -run TestDevice_deadlock ./device | grep -A 10 sync.runtime_SemacquireMutex
      sync.runtime_SemacquireMutex(0xc000117d20?, 0x94?, 0x0?)
              /usr/local/opt/go/libexec/src/runtime/sema.go:77 +0x25
      sync.(*Mutex).lockSlow(0xc000130518)
              /usr/local/opt/go/libexec/src/sync/mutex.go:171 +0x213
      sync.(*Mutex).Lock(0xc000130518)
              /usr/local/opt/go/libexec/src/sync/mutex.go:90 +0x55
      golang.zx2c4.com/wireguard/device.(*Device).Close(0xc000130500)
              /Users/martin.basovnik/git/basovnik/wireguard-go/device/device.go:373 +0xb6
      golang.zx2c4.com/wireguard/device.TestDevice_deadlock(0x0?)
              /Users/martin.basovnik/git/basovnik/wireguard-go/device/device_test.go:480 +0x2c
      testing.tRunner(0xc00014c000, 0x131d7b0)
      --
      sync.runtime_SemacquireMutex(0xc000130564?, 0x60?, 0xc000130548?)
              /usr/local/opt/go/libexec/src/runtime/sema.go:77 +0x25
      sync.(*Mutex).lockSlow(0xc000130750)
              /usr/local/opt/go/libexec/src/sync/mutex.go:171 +0x213
      sync.(*Mutex).Lock(0xc000130750)
              /usr/local/opt/go/libexec/src/sync/mutex.go:90 +0x55
      sync.(*RWMutex).Lock(0xc000130750)
              /usr/local/opt/go/libexec/src/sync/rwmutex.go:147 +0x45
      golang.zx2c4.com/wireguard/device.(*Device).upLocked(0xc000130500)
              /Users/martin.basovnik/git/basovnik/wireguard-go/device/device.go:179 +0x72
      golang.zx2c4.com/wireguard/device.(*Device).changeState(0xc000130500, 0x1)

  Signed-off-by: Martin Basovnik <martin.basovnik@gmail.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: do atomic 64-bit add outside of vector loop
  Jason A. Donenfeld, 2023-12-11 (1 file, -1/+4)

  Only bother updating the rxBytes counter once we've processed a whole
  vector, since additions are atomic.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
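  A sketch of the pattern, with illustrative names: sum packet sizes into a
  local variable and do one atomic add per vector rather than one per
  packet:

      package main

      import (
          "fmt"
          "sync/atomic"
      )

      func receiveVector(rxBytes *atomic.Uint64, packets [][]byte) {
          var batchBytes uint64
          for _, pkt := range packets {
              batchBytes += uint64(len(pkt)) // plain add inside the loop
          }
          rxBytes.Add(batchBytes) // single atomic op for the whole vector
      }

      func main() {
          var rx atomic.Uint64
          receiveVector(&rx, [][]byte{make([]byte, 100), make([]byte, 50)})
          fmt.Println(rx.Load()) // 150
      }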
* device: reduce redundant per-packet overhead in RX path
  Jordan Whited, 2023-12-11 (1 file, -6/+15)

  Peer.RoutineSequentialReceiver() deals with packet vectors and does not
  need to perform timer and endpoint operations for every packet in a
  given vector. Changing these per-packet operations to per-vector
  improves throughput by as much as 10% in some environments.

  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: change Peer.endpoint locking to reduce contention
  Jordan Whited, 2023-12-11 (6 files, -84/+86)

  Access to Peer.endpoint was previously synchronized by Peer.RWMutex.
  This has now moved to Peer.endpoint.Mutex.

  Peer.SendBuffers() is now the sole caller of Endpoint.ClearSrc(), which
  is signaled via a new bool, Peer.endpoint.clearSrcOnTx. Previous callers
  of Endpoint.ClearSrc() now set this bool, primarily via
  peer.markEndpointSrcForClearing().

  Peer.SetEndpointFromPacket() clears Peer.endpoint.clearSrcOnTx when an
  updated conn.Endpoint is stored. This maintains the same event order as
  before, i.e. a conn.Endpoint received after peer.endpoint.clearSrcOnTx
  is set, but before the next Peer.SendBuffers() call, results in the
  latest conn.Endpoint source being used for the next packet transmission.

  These changes result in throughput improvements for single flow,
  parallel (-P n) flow, and bidirectional (--bidir) flow iperf3 TCP/UDP
  tests as measured on both Linux and Windows. Latency under load improves
  especially for high-throughput Linux scenarios. These improvements are
  likely realized on all platforms to some degree, as the changes are not
  platform-specific.

  Co-authored-by: James Tucker <james@tailscale.com>
  Signed-off-by: James Tucker <james@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
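  An illustrative reduction of that locking shape (assumed names, not the
  tree's actual types): the endpoint carries its own Mutex plus a
  clearSrcOnTx flag, so former ClearSrc() callers only set the flag and
  the clear happens on the next send:

      package main

      import (
          "fmt"
          "sync"
      )

      // Endpoint stands in for conn.Endpoint; ClearSrc is the only method
      // this sketch needs.
      type Endpoint interface {
          ClearSrc()
      }

      type Peer struct {
          endpoint struct {
              sync.Mutex
              val          Endpoint
              clearSrcOnTx bool
          }
      }

      func (p *Peer) markEndpointSrcForClearing() {
          p.endpoint.Lock()
          defer p.endpoint.Unlock()
          p.endpoint.clearSrcOnTx = true
      }

      // sendBuffers applies the deferred clear right before the endpoint
      // is used for transmission.
      func (p *Peer) sendBuffers() {
          p.endpoint.Lock()
          ep := p.endpoint.val
          if p.endpoint.clearSrcOnTx {
              ep.ClearSrc()
              p.endpoint.clearSrcOnTx = false
          }
          p.endpoint.Unlock()
          _ = ep // ... transmit using ep ...
      }

      type dummyEndpoint struct{ src string }

      func (d *dummyEndpoint) ClearSrc() { d.src = "" }

      func main() {
          var p Peer
          d := &dummyEndpoint{src: "192.0.2.1"}
          p.endpoint.val = d
          p.markEndpointSrcForClearing()
          p.sendBuffers()
          fmt.Printf("src after send: %q\n", d.src)
      }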
* device: ratchet up max segment size on android
  Jason A. Donenfeld, 2023-10-22 (1 file, -1/+1)

  GRO requires big allocations to be efficient. This isn't great, as there
  might be Android memory usage issues. So we should revisit this commit.
  But at least it gets things working again.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: move Queue{In,Out}boundElement Mutex to container type
  Jordan Whited, 2023-10-10 (6 files, -111/+121)

  Queue{In,Out}boundElement locking can contribute to significant overhead
  via sync.Mutex.lockSlow() in some environments. These types are passed
  throughout the device package as elements in a slice, so move the
  per-element Mutex to a container around the slice.

  Reviewed-by: Maisem Ali <maisem@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
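  The container idea looks roughly like this (field names are assumptions
  for the sketch): one Mutex on a wrapper around the slice, instead of a
  Mutex inside every element:

      package main

      import (
          "fmt"
          "sync"
      )

      type QueueOutboundElement struct {
          packet []byte
      }

      // One lock now covers the whole batch of elements.
      type QueueOutboundElementsContainer struct {
          sync.Mutex
          elems []*QueueOutboundElement
      }

      func main() {
          c := &QueueOutboundElementsContainer{
              elems: []*QueueOutboundElement{{packet: []byte("a")}, {packet: []byte("b")}},
          }
          c.Lock() // a single lock operation per vector
          n := len(c.elems)
          c.Unlock()
          fmt.Println(n)
      }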
* device: distribute crypto work as slice of elements
  Jordan Whited, 2023-10-10 (3 files, -55/+55)

  After reducing UDP stack traversal overhead via GSO and GRO,
  runtime.chanrecv() began to account for a high percentage (20% in one
  environment) of perf samples during a throughput benchmark. The
  individual packet channel ops with the crypto goroutines were the
  primary contributor to this overhead.

  Updating these channels to pass vectors, which the device package
  already handles at its ends, reduced this overhead substantially and
  improved throughput.

  The iperf3 results below demonstrate the effect of this commit between
  two Linux computers with i5-12400 CPUs. There is roughly ~13us of round
  trip latency between them.

  The first result is with UDP GSO and GRO, and with single-element
  channels.

      Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
      [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
      [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   3.15 MBytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      Test Complete. Summary Results:
      [ ID] Interval           Transfer     Bitrate         Retr
      [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   sender
      [  5]   0.00-10.04  sec  12.3 GBytes  10.6 Gbits/sec        receiver

  The second result is with channels updated to pass a slice of elements.

      Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
      [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
      [  5]   0.00-10.00  sec  13.2 GBytes  11.3 Gbits/sec  182   3.15 MBytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      Test Complete. Summary Results:
      [ ID] Interval           Transfer     Bitrate         Retr
      [  5]   0.00-10.00  sec  13.2 GBytes  11.3 Gbits/sec  182   sender
      [  5]   0.00-10.04  sec  13.2 GBytes  11.3 Gbits/sec        receiver

  Reviewed-by: Adrian Dewhurst <adrian@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
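  A toy version of the channel change (names are illustrative): the worker
  receives a whole slice of elements per channel operation, so
  runtime.chanrecv runs once per vector instead of once per packet:

      package main

      import "fmt"

      type element struct{ payload []byte }

      func cryptoWorker(in <-chan []*element, done chan<- int) {
          total := 0
          for batch := range in { // one receive per vector
              for _, e := range batch {
                  total += len(e.payload) // stand-in for the crypto work
              }
          }
          done <- total
      }

      func main() {
          in := make(chan []*element, 4)
          done := make(chan int)
          go cryptoWorker(in, done)
          in <- []*element{{payload: make([]byte, 1000)}, {payload: make([]byte, 500)}}
          close(in)
          fmt.Println(<-done)
      }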
* conn, device: use UDP GSO and GRO on Linux
  Jordan Whited, 2023-10-10 (1 file, -0/+8)

  StdNetBind probes for UDP GSO and GRO support at runtime. UDP GSO is
  dependent on checksum offload support on the egress netdev. UDP GSO will
  be disabled in the event sendmmsg() returns EIO, which is a strong
  signal that the egress netdev does not support checksum offload.

  The iperf3 results below demonstrate the effect of this commit between
  two Linux computers with i5-12400 CPUs. There is roughly ~13us of round
  trip latency between them.

  The first result is from commit 052af4a without UDP GSO or GRO.

      Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
      [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
      [  5]   0.00-10.00  sec  9.85 GBytes  8.46 Gbits/sec  1139  3.01 MBytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      Test Complete. Summary Results:
      [ ID] Interval           Transfer     Bitrate         Retr
      [  5]   0.00-10.00  sec  9.85 GBytes  8.46 Gbits/sec  1139  sender
      [  5]   0.00-10.04  sec  9.85 GBytes  8.42 Gbits/sec        receiver

  The second result is with UDP GSO and GRO.

      Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
      [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
      [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   3.15 MBytes
      - - - - - - - - - - - - - - - - - - - - - - - - -
      Test Complete. Summary Results:
      [ ID] Interval           Transfer     Bitrate         Retr
      [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   sender
      [  5]   0.00-10.04  sec  12.3 GBytes  10.6 Gbits/sec        receiver

  Reviewed-by: Adrian Dewhurst <adrian@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: wait for and lock ipc operations during close
  James Tucker, 2023-06-27 (1 file, -0/+2)

  If an IPC operation is in flight while close starts, it is possible for
  both processes to deadlock. Prevent this by taking the IPC lock at the
  start of close and holding it for the duration.

  Signed-off-by: James Tucker <jftucker@gmail.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* conn: disable sticky sockets on Android
  Jason A. Donenfeld, 2023-03-23 (1 file, -0/+3)

  We can't have the netlink listener socket, so it's not possible to
  support it. Plus, Android networking stack complexity makes it a bit
  tricky anyway, so best to leave it disabled.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* global: buff -> buf
  Jason A. Donenfeld, 2023-03-13 (4 files, -48/+48)

  This always struck me as kind of weird and non-standard.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* conn: inch BatchSize toward being non-dynamic
  Jason A. Donenfeld, 2023-03-10 (2 files, -2/+2)

  There's not really a use at the moment for making this configurable, and
  once bind_windows.go behaves like bind_std.go, we'll be able to use
  constants everywhere. So begin that simplification now.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* conn, device, tun: implement vectorized I/O on Linux
  Jordan Whited, 2023-03-10 (3 files, -9/+13)

  Implement TCP offloading via TSO and GRO for the Linux tun.Device, which
  is made possible by virtio extensions in the kernel's TUN driver.

  Delete conn.LinuxSocketEndpoint in favor of a collapsed conn.StdNetBind.
  conn.StdNetBind makes use of recvmmsg() and sendmmsg() on Linux. All
  platforms now fall under conn.StdNetBind, except for Windows, which
  remains in conn.WinRingBind, which still needs to be adjusted to handle
  multiple packets.

  Also refactor sticky sockets support to eventually be applicable on
  platforms other than just Linux. However, Linux remains the sole
  platform that fully implements it for now.

  Co-authored-by: James Tucker <james@tailscale.com>
  Signed-off-by: James Tucker <james@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* conn, device, tun: implement vectorized I/O plumbing
  Jordan Whited, 2023-03-10 (8 files, -260/+562)

  Accept packet vectors for reading and writing in the tun.Device and
  conn.Bind interfaces, so that the internal plumbing between these
  interfaces now passes a vector of packets. Vectors move untouched
  between these interfaces, i.e. if 128 packets are received from
  conn.Bind.Read(), 128 packets are passed to tun.Device.Write(). There is
  no internal buffering.

  Currently, existing implementations are only adjusted to have vectors of
  length one. Subsequent patches will improve that.

  Also, as a related fixup, use the unix and windows packages rather than
  the syscall package when possible.

  Co-authored-by: James Tucker <james@tailscale.com>
  Signed-off-by: James Tucker <james@tailscale.com>
  Signed-off-by: Jordan Whited <jordan@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
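  The plumbing can be pictured with a toy vectorized device; the method
  shapes below only approximate the real tun.Device/conn.Bind signatures
  and the type is invented for the sketch:

      package main

      import "fmt"

      type vectorDevice interface {
          Read(bufs [][]byte, sizes []int, offset int) (n int, err error)
          Write(bufs [][]byte, offset int) (int, error)
      }

      // loopDevice queues packets in memory; a vector handed to Write can
      // be drained by a single Read call, with no per-packet buffering.
      type loopDevice struct{ queue [][]byte }

      func (d *loopDevice) Read(bufs [][]byte, sizes []int, offset int) (int, error) {
          n := 0
          for n < len(bufs) && n < len(d.queue) {
              sizes[n] = copy(bufs[n][offset:], d.queue[n])
              n++
          }
          d.queue = d.queue[n:]
          return n, nil
      }

      func (d *loopDevice) Write(bufs [][]byte, offset int) (int, error) {
          for _, b := range bufs {
              d.queue = append(d.queue, append([]byte(nil), b[offset:]...))
          }
          return len(bufs), nil
      }

      func main() {
          var d vectorDevice = &loopDevice{}
          _, _ = d.Write([][]byte{[]byte("pkt1"), []byte("pkt2")}, 0)
          bufs := [][]byte{make([]byte, 16), make([]byte, 16)}
          sizes := make([]int, 2)
          n, _ := d.Read(bufs, sizes, 0)
          fmt.Println(n, sizes[:n])
      }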
* device: uniformly check ECDH output for zeros
  Jason A. Donenfeld, 2023-02-16 (5 files, -38/+45)

  For some reason, this was omitted for response messages.

  Reported-by: z <dzm@unexpl0.red>
  Fixes: 8c34c4c ("First set of code review patches")
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
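  The check in question is a constant-time comparison of the 32-byte
  shared secret against all zeros; a self-contained sketch (the helper
  name is an assumption):

      package main

      import (
          "crypto/subtle"
          "fmt"
      )

      // isZero reports, in constant time, whether an ECDH output is all
      // zeros; such an output must be rejected rather than used as key
      // material.
      func isZero(secret [32]byte) bool {
          var zero [32]byte
          return subtle.ConstantTimeCompare(secret[:], zero[:]) == 1
      }

      func main() {
          var s [32]byte
          fmt.Println(isZero(s)) // true
          s[0] = 1
          fmt.Println(isZero(s)) // false
      }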
* global: bump copyright year
  Jason A. Donenfeld, 2023-02-07 (36 files, -36/+36)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* global: bump copyright year
  Jason A. Donenfeld, 2022-09-20 (36 files, -36/+36)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* all: use Go 1.19 and its atomic types
  Brad Fitzpatrick, 2022-09-04 (15 files, -234/+106)

  Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* conn, device, tun: set CLOEXEC on fds
  Brad Fitzpatrick, 2022-07-04 (1 file, -1/+1)

  Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* all: use any in place of interface{}
  Josh Bleecher Snyder, 2022-03-16 (4 files, -15/+15)

  Enabled by using Go 1.18. A bit less verbose.

  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
* all: update to Go 1.18
  Josh Bleecher Snyder, 2022-03-16 (6 files, -15/+9)

  Bump go.mod and README.
  Switch to upstream net/netip.
  Use strings.Cut.

  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
* global: apply gofumpt
  Jason A. Donenfeld, 2021-12-09 (9 files, -18/+9)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: handle peer post config on blank line
  Jason A. Donenfeld, 2021-11-29 (1 file, -0/+1)

  We missed a function exit point. This was exacerbated by e3134bf
  ("device: defer state machine transitions until configuration is
  complete"), but the bug existed prior. Minus provided the following
  useful reproducer script:

      #!/usr/bin/env bash
      set -eux
      make wireguard-go || exit 125
      ip netns del test-ns || true
      ip netns add test-ns
      ip link add test-kernel type wireguard
      wg set test-kernel listen-port 0 private-key <(echo "QMCfZcp1KU27kEkpcMCgASEjDnDZDYsfMLHPed7+538=") peer "eDPZJMdfnb8ZcA/VSUnLZvLB2k8HVH12ufCGa7Z7rHI=" allowed-ips 10.51.234.10/32
      ip link set test-kernel netns test-ns up
      ip -n test-ns addr add 10.51.234.1/24 dev test-kernel
      port=$(ip netns exec test-ns wg show test-kernel listen-port)
      ip link del test-go || true
      ./wireguard-go test-go
      wg set test-go private-key <(echo "WBM7qimR3vFk1QtWNfH+F4ggy/hmO+5hfIHKxxI4nF4=") peer "+nj9Dkqpl4phsHo2dQliGm5aEiWJJgBtYKbh7XjeNjg=" allowed-ips 0.0.0.0/0 endpoint 127.0.0.1:$port
      ip addr add 10.51.234.10/24 dev test-go
      ip link set test-go up
      ping -c2 -W1 10.51.234.1

  Reported-by: minus <minus@mnus.de>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: reduce peer lock critical section in UAPI
  Josh Bleecher Snyder, 2021-11-23 (1 file, -26/+28)

  The deferred RUnlock calls weren't executing until all peers had been
  processed. Add an anonymous function so that each peer may be unlocked
  as soon as it is completed.

  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
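  The pattern is the classic fix for defer-in-a-loop; a minimal sketch
  with invented types:

      package main

      import (
          "fmt"
          "sync"
      )

      type peer struct {
          sync.RWMutex
          name string
      }

      func dumpPeers(peers []*peer) {
          for _, p := range peers {
              func() {
                  p.RLock()
                  // The deferred unlock runs when this closure returns,
                  // i.e. per peer, not at the end of dumpPeers.
                  defer p.RUnlock()
                  fmt.Println("peer:", p.name)
              }()
          }
      }

      func main() {
          dumpPeers([]*peer{{name: "A"}, {name: "B"}})
      }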
* device: remove code using unsafe
  Josh Bleecher Snyder, 2021-11-23 (1 file, -33/+13)

  There is no performance impact.

      name                             old time/op  new time/op  delta
      TrieIPv4Peers100Addresses1000-8  78.6ns ± 1%  79.4ns ± 3%    ~     (p=0.604 n=10+9)
      TrieIPv4Peers10Addresses10-8     29.1ns ± 2%  28.8ns ± 1%  -1.12%  (p=0.014 n=10+9)
      TrieIPv6Peers100Addresses1000-8  78.9ns ± 1%  78.6ns ± 1%    ~     (p=0.492 n=10+10)
      TrieIPv6Peers10Addresses10-8     29.3ns ± 2%  28.6ns ± 2%  -2.16%  (p=0.000 n=10+10)

  Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* global: use netip where possible now
  Jason A. Donenfeld, 2021-11-23 (7 files, -54/+57)

  There are more places where we'll need to add it later, when Go 1.18
  comes out with support for it in the "net" package. Also, allowedips
  still uses slices internally, which might be suboptimal.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: only propagate roaming value before peer is referenced elsewhere
  Jason A. Donenfeld, 2021-11-16 (1 file, -1/+3)

  A peer.endpoint never becomes nil after being not-nil, so creation is
  the only time we actually need to set this. This prevents a race from
  when the variable is actually used elsewhere, and allows us to avoid an
  expensive atomic.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: align 64-bit atomic member in Device
  Jason A. Donenfeld, 2021-11-16 (1 file, -5/+6)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: start peers before running handshake test
  Jason A. Donenfeld, 2021-11-16 (1 file, -0/+2)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: fix nil pointer dereference in uapi read
  David Anderson, 2021-11-16 (1 file, -2/+2)

  Signed-off-by: David Anderson <danderson@tailscale.com>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: make new peers inherit broken mobile semantics
  Jason A. Donenfeld, 2021-11-15 (3 files, -0/+5)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: defer state machine transitions until configuration is complete
  Jason A. Donenfeld, 2021-11-15 (3 files, -15/+18)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: do not consume handshake messages if not running
  Jason A. Donenfeld, 2021-11-15 (1 file, -1/+1)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: timers: use pre-seeded per-thread unlocked fastrandn for jitter
  Jason A. Donenfeld, 2021-10-28 (1 file, -10/+5)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: timers: seed unsafe rng before use for jitter
  Jason A. Donenfeld, 2021-10-28 (1 file, -3/+11)

  Because the unsafe rng was never seeded, the jitter previously followed
  a fixed pattern, which didn't help when a fleet of computers all boot at
  once.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* global: remove old-style build tags
  Jason A. Donenfeld, 2021-10-12 (5 files, -5/+0)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* global: add new go 1.17 build comments
  Jason A. Donenfeld, 2021-09-05 (5 files, -2/+7)

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: zero out allowedip node pointers when removing
  Jason A. Donenfeld, 2021-06-04 (2 files, -1/+22)

  This should make it a bit easier for the garbage collector.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: limit allowedip fuzzer a to 4 times through
  Jason A. Donenfeld, 2021-06-03 (1 file, -5/+10)

  Trying this for every peer winds up being very slow and precludes it
  from acceptable runtime in the CI, so reduce this to 4.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: simplify allowedips lookup signature
  Jason A. Donenfeld, 2021-06-03 (5 files, -17/+18)

  The inliner should handle this for us.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
* device: remove nodes by peer in O(1) instead of O(n)
  Jason A. Donenfeld, 2021-06-03 (2 files, -72/+82)

  Now that we have parent pointers hooked up, we can simply go right to
  the node and remove it in place, rather than having to recursively walk
  the entire trie.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
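  A toy illustration of the O(1) removal (names invented for the sketch):
  each node records the parent's child slot that points at it, so
  unlinking is a single pointer write, and the stale pointers are zeroed
  as in the "zero out allowedip node pointers" commit above:

      package main

      import "fmt"

      type node struct {
          parentBit **node // the slot in the parent that points at this node
          child     [2]*node
          peer      string
      }

      func removeByPeer(nodes []*node) {
          for _, n := range nodes {
              if n.parentBit != nil {
                  *n.parentBit = nil // O(1) unlink, no trie walk
              }
              n.child[0], n.child[1], n.parentBit = nil, nil, nil
          }
      }

      func main() {
          root := &node{}
          leaf := &node{peer: "peer0"}
          root.child[1] = leaf
          leaf.parentBit = &root.child[1]
          removeByPeer([]*node{leaf})
          fmt.Println(root.child[1] == nil) // true
      }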