We can simplify the ratelimit init/deinit calls by allocating the table
statically, that is by not using hashinit_flags. That function ended up
doing some unnecessary calculation and meant that the mask couldn't be
constant.
By increasing the size of struct ratelimit, this also caught a nasty
(but benign) bug, where ratelimit_pool was initialised to allocate
sizeof(struct ratelimit) and not sizeof(struct ratelimit_entry). It has
been this way since the initial FreeBSD import and I didn't pick up on
it while moving the uma_zcreate call to wg_cookie.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
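The static-table approach can be sketched as follows. Field names and sizes here are illustrative, not the actual wg_cookie.c definitions; the point is that embedding the table in struct ratelimit makes the bucket mask a compile-time constant, and makes the sizeof mix-up obvious.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout, loosely modelled on wg_cookie.c. */
struct ratelimit_entry {
	struct ratelimit_entry	*r_next;
	uint32_t		 r_ip;
	int64_t			 r_tokens;
};

#define RATELIMIT_SIZE	(1 << 13)		/* power of two */
#define RATELIMIT_MASK	(RATELIMIT_SIZE - 1)	/* constant, no hashinit_flags */

struct ratelimit {
	struct ratelimit_entry	*rl_table[RATELIMIT_SIZE];	/* static table */
	size_t			 rl_table_num;
};

/* Bucket selection with the constant mask. */
static size_t
ratelimit_bucket(uint32_t hash)
{
	return hash & RATELIMIT_MASK;
}

/* The fixed bug, in kernel terms (shown as a comment only):
 *   wrong: uma_zcreate("ratelimit", sizeof(struct ratelimit), ...)
 *   right: uma_zcreate("ratelimit", sizeof(struct ratelimit_entry), ...)
 * Benign, but every pool element was allocated at the size of the whole
 * table-bearing struct rather than a single entry. */
```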
The two main changes here are:
* Remove cookie_ prefix from static functions. This is a leftover from
OpenBSD where they don't want static functions.
* Rename cm to macs, and cp to cm. Not sure where this came from but it
didn't really make much sense to leave it as is.
The rest are whitespace changes. Overall there is no modification to
functionality here, just appearance.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Primarily this commit adds a cookie_valid state, to prevent a recently
booted machine from sending a mac2. We also do a little bit of reworking
on locking and a fixup for int to bool.
There is one slight difference from cookie_valid (latest_cookie.is_valid)
on Linux, and that is to set cookie_valid to false when the
cookie_birthdate has expired. The purpose of this is to avoid the
expensive timer check after it has expired.
For the locking, we want to hold a write lock in cookie_maker_mac
because we write to mac1_last, mac1_valid and cookie_valid. This
shouldn't cause too much contention, as this is a per-peer lock and we
only take it when sending handshake packets. This is different from
Linux, which writes all its variables at the start, then downgrades to a
read lock.
We also match cookie_maker_consume_payload locking to Linux, that is to
read lock while checking mac1_valid and decrypting the cookie then take
a write lock to set the cookie.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
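The expiry behaviour described above can be sketched like this. The struct and field names (cp_cookie_valid, cp_cookie_birthdate) and the 120-second lifetime are illustrative assumptions, not the exact wg_cookie.h definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <time.h>

#define COOKIE_MAX_AGE	120	/* seconds; assumed lifetime */

/* Minimal model of the cookie maker state. */
struct cookie_maker {
	bool	cp_cookie_valid;
	time_t	cp_cookie_birthdate;
};

/* Returns true if the cookie may still be used for mac2. Once the
 * birthdate expires we clear cp_cookie_valid, so later calls skip the
 * timestamp comparison entirely (the cheap path mentioned above). */
static bool
cookie_is_fresh(struct cookie_maker *cp, time_t now)
{
	if (!cp->cp_cookie_valid)
		return false;
	if (now - cp->cp_cookie_birthdate > COOKIE_MAX_AGE) {
		cp->cp_cookie_valid = false;	/* remember the expiry */
		return false;
	}
	return true;
}
```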
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
While it was nice to have per peer loop detection, it was not meant to
be. The loop tag has a tag type == 0, which conflicts with other tags.
Therefore we want to at least be a little bit more sure that the tag
cookie is unique to the loop tag. I guess the peer address was also
quite hacky, so on the other hand I'm glad to be rid of it.
Now we have a loop of 8 (to any peer) which should be good enough for an
edge case operation.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Also remove the stale entry from the TODO list.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
And then fix broken allowedips implementation for the static unit tests
to pass.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Previously we relied on gc being called when adding a new entry, which
could leave us in a gc "blind spot". With this change, we schedule a
callout to run gc whenever we have entries in the table. The callout
will continue to run every ELEMENT_TIMEOUT seconds until the table is
empty.
Access to rl_gc is locked by rl_lock, so we will never have any threads
racing to callout_{pending,stop,reset}.
The alternative (which Linux does currently) is just to run the callout
every ELEMENT_TIMEOUT (1) second even when no entries are in the table.
However, the callout solution proposed here seems simple enough.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
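The sweep that the callout performs might look roughly like this userspace model. Entry layout and names are assumed; the real code holds rl_lock around the walk and frees expired entries back to the pool, and the callout is rescheduled only while the bucket counts sum to a non-zero table size.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define ELEMENT_TIMEOUT	1	/* seconds, as in the message above */

struct entry {
	struct entry	*next;
	time_t		 last_time;
};

/* Sweep one bucket list, unlinking entries older than ELEMENT_TIMEOUT.
 * Returns the number of surviving entries; the caller reschedules the
 * callout only if the total over all buckets is non-zero, so an empty
 * table costs nothing. Entries are caller-owned in this model, so no
 * free() here; the kernel code returns them to the pool. */
static size_t
gc_bucket(struct entry **head, time_t now)
{
	size_t n = 0;
	struct entry **pp = head;

	while (*pp != NULL) {
		if (now - (*pp)->last_time > ELEMENT_TIMEOUT)
			*pp = (*pp)->next;	/* unlink expired entry */
		else {
			n++;
			pp = &(*pp)->next;
		}
	}
	return n;
}
```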
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Nothing serious here, just use a goto in wg_deliver_{in,out} rather than
another if/else indentation. The code should have no functional change,
just improve readability.
Additionally, use a local `sc` variable rather than `peer->p_sc` in
spots.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
So the last change broke consuming responses, as it may return an
invalid remote pointer. Thanks for the catch zx2c4. We just pass a flag
"lookup_keypair" which will look up the keypair when we want it (for
cookies) and will not when we don't (for consuming responses).
It would be possible to merge both noise_remote_index_lookup and
noise_keypair_lookup, but the result would probably need to return a
void * (for both keypair and remote) or a noise_index * which would need
to be cast to the relevant type somewhere. The trickiest thing here
would be for if_wg to "put" the result of the function, as it may be a
remote or a keypair (which store their refcount in different locations).
Perhaps it would return a noise_index * which could contain the refcount
for both keypair and remote. It all seems easier to leave them separate.
The only argument for combining them would be to reduce duplication of
(similar) functions.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
This is needed to remove the peer from the public key hashtable before
calling noise_remote_destroy. This will prevent any incoming handshakes
from starting in that time. It also cleans up the insert path to make it
more like it was before the wg_noise EPOCH changes.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This happens if a jail does not have an interface with a configured v4
or v6 address. In that case, we just fall back to only having one socket
for the address family that does exist. In the case that neither socket
can be created, fail as before.
Closes: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=254212
Reported-by: Mark Johnston <markj@FreeBSD.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
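The fallback logic can be illustrated in userspace with socket(2); the kernel code uses socreate() against the jail's credentials, and the function name here is hypothetical.

```c
#include <assert.h>
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to create both UDP sockets, but only fail if neither address
 * family is available (e.g. a jail with no configured v4 or v6
 * address). On partial success the missing fd stays -1 and the caller
 * simply runs with one family. */
static int
wg_socket_pair(int *fd4, int *fd6)
{
	*fd4 = socket(AF_INET, SOCK_DGRAM, 0);
	*fd6 = socket(AF_INET6, SOCK_DGRAM, 0);
	if (*fd4 < 0 && *fd6 < 0)
		return EAFNOSUPPORT;	/* fail as before */
	return 0;			/* at least one family worked */
}
```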
This is a fixup of f685f466, where previously we chacha'd in a
different loop from poly'ing. Now we do both in the same loop to keep
the cache hot. In practice this didn't result in an (easily) observable change,
which could be due to only having 1-2 mbufs in a chain. However this is
still the preferred way to do it.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
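The loop shape is the whole point here, so a toy version may help: one pass over the mbuf chain feeds each cluster to both the cipher and the MAC while it is cache-hot. The XOR "cipher" and additive "MAC" below are stand-ins for the real chacha20/poly1305 in crypto.c, and the struct is a bare sketch of an mbuf chain.

```c
#include <stddef.h>
#include <stdint.h>

struct mbuf {
	struct mbuf	*m_next;
	uint8_t		*m_data;
	size_t		 m_len;
};

/* Toy stand-ins for the stream cipher and the authenticator. */
static void
toy_crypt(uint8_t *p, size_t n)
{
	for (size_t i = 0; i < n; i++)
		p[i] ^= 0x5a;
}

static void
toy_mac(uint32_t *acc, const uint8_t *p, size_t n)
{
	for (size_t i = 0; i < n; i++)
		*acc += p[i];
}

/* Single pass over the chain: encrypt a cluster, then immediately MAC
 * the ciphertext while it is still in cache, instead of a second walk. */
static uint32_t
encrypt_chain(struct mbuf *m0)
{
	uint32_t tag = 0;

	for (struct mbuf *m = m0; m != NULL; m = m->m_next) {
		toy_crypt(m->m_data, m->m_len);		/* chacha step */
		toy_mac(&tag, m->m_data, m->m_len);	/* poly step, same pass */
	}
	return tag;
}
```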
Zeroing these values broke TCP recv, which is not great, so just remove
them and hope they don't store anything secret.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
While on __LP64__ uint64_t is unsigned long, that is not the case for
!__LP64__, where it is commonly unsigned long long. Here we use the
PRIu64 macro as defined in machine/_inttypes.h.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
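A minimal illustration of the portable formatting; the helper name and the counter are made up, the PRIu64 usage is the real fix.

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Format a 64-bit counter portably. "%lu" would be wrong on 32-bit
 * (!__LP64__) targets, where uint64_t is unsigned long long; PRIu64
 * expands to the correct conversion specifier on every platform. */
static int
format_count(char *buf, size_t len, uint64_t count)
{
	return snprintf(buf, len, "%" PRIu64, count);
}
```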
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
This introduces a couple of routines to encrypt the mbufs in place. It
is likely that these will be replaced by something in opencrypto,
however for the time being this fixes a heap overflow and sets up
wg_noise for the "correct" API. When the time comes, this should make it
easier to drop in new crypto. It should be noted, this was written at
0500.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
There was a lot of junk that didn't need to be here. This commit cleans
it all up and replaces with these three functions:
* add
* lookup
* remove_all
We've removed wg_aip_table because the abstraction there was minimal
and t_count was never used. The interface of lookup
has changed to provide the af and the address pointer, rather than an
mbuf and direction. This simplifies the lookup code, as well as aligning
better with the calling locations.
We've also changed the list type of `p_aips` from CK_LIST to a regular
LIST, as we only modify the list while holding `sc_lock`.
Additionally, we keep a count of allowed ips in memory, rather than
counting them in wgc_get. While on this though, I think we're safe to
remove the panic checks in wgc_get (that the number of peers/allowed
ips has changed underneath us), but I'll leave that for another day.
There is a new TODO, which is to check the response of rn_inithead.
While at first glance this may appear to be a regression, in actual
fact these were never really checked. wg_aip_init would check, and free
if appropriate, returning an error - but wg_clone_create would never
check the return value of wg_aip_init, so it is possible that t_ip4 and
t_ip6 may be NULL in a created interface.
The algorithm for removing all allowed ips for a peer should be a lot
quicker, that is, we don't need to traverse the entire graph to remove
our entries. Instead, we just iterate over the list of entries stored in
the peer. This will be a speedup in the case of many peers with a small
number of allowed ips each.
It might still be worth porting the self test for allowed ips (I still
want to port over a couple of other tests too), but again, that is a
task for another day.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
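The reduced API (add / lookup / remove_all) and the per-peer removal list can be sketched with a toy list-based model. The real code does longest-prefix match via the radix trie (rn_*); exact matching over a flat list here only shows the interface shape, the in-memory count, and why remove_all never walks the whole table.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>
#include <sys/socket.h>

struct wg_peer;

struct wg_aip {
	LIST_ENTRY(wg_aip)	 a_table_entry;	/* global table membership */
	LIST_ENTRY(wg_aip)	 a_peer_entry;	/* peer's p_aips list */
	struct wg_peer		*a_peer;
	int			 a_af;
	unsigned char		 a_addr[16];
};

struct wg_peer {
	LIST_HEAD(, wg_aip)	 p_aips;
	size_t			 p_aips_num;	/* counted here, not in wgc_get */
};

static LIST_HEAD(, wg_aip) aip_table = LIST_HEAD_INITIALIZER(aip_table);

static size_t
addr_len(int af)
{
	return af == AF_INET ? 4 : 16;
}

static int
wg_aip_add(struct wg_peer *peer, int af, const void *addr)
{
	struct wg_aip *aip = calloc(1, sizeof(*aip));

	if (aip == NULL)
		return -1;
	aip->a_peer = peer;
	aip->a_af = af;
	memcpy(aip->a_addr, addr, addr_len(af));
	LIST_INSERT_HEAD(&aip_table, aip, a_table_entry);
	LIST_INSERT_HEAD(&peer->p_aips, aip, a_peer_entry);
	peer->p_aips_num++;
	return 0;
}

/* Takes the af and address pointer directly, not an mbuf + direction. */
static struct wg_peer *
wg_aip_lookup(int af, const void *addr)
{
	struct wg_aip *aip;

	LIST_FOREACH(aip, &aip_table, a_table_entry)
		if (aip->a_af == af &&
		    memcmp(aip->a_addr, addr, addr_len(af)) == 0)
			return aip->a_peer;
	return NULL;
}

static void
wg_aip_remove_all(struct wg_peer *peer)
{
	struct wg_aip *aip;

	/* Walk only the peer's own list, never the whole table. */
	while ((aip = LIST_FIRST(&peer->p_aips)) != NULL) {
		LIST_REMOVE(aip, a_table_entry);
		LIST_REMOVE(aip, a_peer_entry);
		free(aip);
	}
	peer->p_aips_num = 0;
}
```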
These are likely bad hangovers from OpenBSD because:
OpenBSD: uint64_t == unsigned long long
FreeBSD: uint64_t == unsigned long
It makes sense to use the native platform types in the calls to printf,
so convert ull to ul and remove the casts.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
This is more sophisticated loop detection than OpenBSD, due to the loop
detection relying on a "cookie". Each "cookie" is unique to the peer (in
this case we use the peer id) and allows us to track which peers a
packet has been sent to.
Each time a packet hits wg_transmit, if_tunnel_check_nesting will count
the number of corresponding tags (indexed by ifp, peer->p_id). If this
is greater than or equal to 1 (i.e. it has been sent to this peer
before), then we raise an error.
Currently the cookie is a uint32_t, which means the p_id gets truncated.
This isn't ideal as it may cause conflicts (if a user adds 2**32 + 1
peers to an interface). However, I'm not too concerned about this for
the time being because nested routing is uncommon and this is an
improvement over the current situation which would likely DoS a host if
a packet was sent in a nested configuration.
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
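A userspace model of the check keyed by (ifp, cookie): the packet's tag list is scanned for a match, and a match means this packet already traversed this peer, i.e. a routing loop. Struct and function names are illustrative, not the mbuf_tags(9) API; note how the uint32_t truncation of p_id mentioned above shows up directly.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_TAGS 16

struct pkt_tag {
	const void	*ifp;
	uint32_t	 cookie;
};

struct pkt {
	struct pkt_tag	 tags[MAX_TAGS];
	size_t		 ntags;
};

/* Return ELOOP if a tag for (ifp, cookie) already exists, otherwise
 * attach one so the next pass through this peer is detected. */
static int
tunnel_check_nesting(struct pkt *p, const void *ifp, uint64_t p_id)
{
	uint32_t cookie = (uint32_t)p_id;	/* p_id gets truncated */

	for (size_t i = 0; i < p->ntags; i++)
		if (p->tags[i].ifp == ifp && p->tags[i].cookie == cookie)
			return ELOOP;	/* seen this peer before */
	if (p->ntags < MAX_TAGS)
		p->tags[p->ntags++] = (struct pkt_tag){ ifp, cookie };
	return 0;
}
```

The last assertion in the test below demonstrates the truncation caveat: peer ids 2**32 apart collide on the same cookie.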
`wg_tag` is a source of trouble when it comes to handling mbufs. This is
due to the fact that calls to things like m_prepend may free the mbuf
underneath us, which would be bad if the tag is still queued in the
peer's queue.
`wg_tag` has also been made redundant on other platforms due to size
restrictions (80 bytes on OpenBSD) which means we cannot grow it to the
required size to hold new fields. With wg_packet, this is no longer a
concern.
This patch includes an import of the send/recv paths (from OpenBSD) to
ensure we don't leak any refcounts. This additionally solves two of the
TODOs as well (chop rx padding, don't copy mbuf). The second TODO is
helpful, because we no longer need to allocate mbufs of a specific size
when encrypting, meaning we no longer have an upper bound on the MTU.
(rebase) On second thoughts, that m_defrag is deadly, as it does not
behave the same as m_defrag on OpenBSD. If the packet is large enough,
there will still be multiple clusters, so treating the first mbuf as the
whole buffer may lead to a heap overflow. This is addressed by the
"encrypt mbuf in place" commit, so while it is an issue here, it is already
resolved. To say it in caps:
THIS COMMIT INTRODUCES A VULN, FIXED BY: encrypt mbuf in place
There could be some discussion around using p_parallel for the staged
and handshake queues. It isn't as idiomatic as I would like, however the
right structure is there so that is something we could address later.
One other thing to consider is that `wg_peer_send_staged` is likely
being called one packet at a time. Is it worthwhile trying to batch
calls together?
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>