path: root/src/hashtables.c
Commit message (Author, Date; files, lines changed)
* global: update copyright (Jason A. Donenfeld, 2019-01-07; 1 file, -1/+1)
* global: give if statements brackets and other cleanups (Jason A. Donenfeld, 2018-10-09; 1 file, -3/+3)
* global: more nits (Jason A. Donenfeld, 2018-10-08; 1 file, -10/+10)
* global: rename struct wireguard_ to struct wg_ (Jason A. Donenfeld, 2018-10-08; 1 file, -5/+5)
  This required a bit of pruning of our christmas trees.

  Suggested-by: Jiri Pirko <jiri@resnulli.us>
* global: prefix all functions with wg_ (Jason A. Donenfeld, 2018-10-02; 1 file, -20/+20)
  I understand why this must be done, though I'm not so happy about having
  to do it. In some places, it puts us over 80 chars and we have to break
  lines up in further ugly ways. And in general, I think this makes things
  harder to read. Yet another thing we must do to please upstream.

  Maybe this can be replaced in the future by some kind of automatic module
  namespacing logic in the linker, or even combined with LTO and aggressive
  symbol stripping.

  Suggested-by: Andrew Lunn <andrew@lunn.ch>
* global: put SPDX identifier on its own line (Jason A. Donenfeld, 2018-09-20; 1 file, -2/+2)
  The kernel has very specific rules correlating file type with comment
  type, and also SPDX identifiers can't be merged with other comments.
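
  A minimal illustration of that rule (the license string and comment text
  here are generic placeholders, not this file's actual header): in a kernel
  .c file the SPDX tag sits alone on the first line as a C99-style comment,
  and everything else goes in its own block.

      // SPDX-License-Identifier: GPL-2.0
      /*
       * Copyright and other notices live in a separate comment block;
       * the SPDX line above is never merged into it.
       */
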
* global: remove non-essential inline annotations (Jason A. Donenfeld, 2018-09-16; 1 file, -3/+3)
* global: run through clang-format (Jason A. Donenfeld, 2018-08-28; 1 file, -36/+69)
  This is the worst commit in the whole repo, making the code much less
  readable, but so it goes with upstream maintainers. We are now woefully
  wrapped at 80 columns.
* peer: ensure destruction doesn't race (Jason A. Donenfeld, 2018-08-03; 1 file, -2/+4)
  Completely rework peer removal to ensure peers don't jump between
  contexts and create races.
* hashtables: document immediate zeroing semantics (Jason A. Donenfeld, 2018-08-01; 1 file, -0/+6)
  Suggested-by: Jann Horn <jann@thejh.net>
* peer: simplify rcu reference counts (Jason A. Donenfeld, 2018-07-31; 1 file, -2/+2)
  Use RCU reference counts only when we must, and otherwise use a more
  reasonably named function.

  Reported-by: Jann Horn <jann@thejh.net>
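
  A sketch of that distinction using generic kref helpers and made-up names,
  not the module's real ones: a lookup done under RCU must tolerate a
  refcount that has already dropped to zero, while a caller that already
  holds a reference can take another one with a plainly named helper.

      #include <linux/kref.h>

      struct peer_like {
              struct kref refcount;
      };

      /* RCU read-side lookup path: the object may be mid-teardown, so the
       * reference is only taken if the count is still non-zero. */
      static struct peer_like *peer_get_maybe_zero(struct peer_like *peer)
      {
              if (peer && !kref_get_unless_zero(&peer->refcount))
                      return NULL;
              return peer;
      }

      /* Caller already holds a reference, so a plain increment suffices. */
      static struct peer_like *peer_get(struct peer_like *peer)
      {
              if (peer)
                      kref_get(&peer->refcount);
              return peer;
      }
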
* global: year bump (Jason A. Donenfeld, 2018-01-03; 1 file, -1/+1)
* global: add SPDX tags to all files (Greg Kroah-Hartman, 2017-12-09; 1 file, -1/+4)
  It's good to have SPDX identifiers in all files as the Linux kernel
  developers are working to add these identifiers to all files. Update all
  files with the correct SPDX license identifier based on the license text
  of the project or based on the license in the file itself. The SPDX
  identifier is a legally binding shorthand, which can be used instead of
  the full boiler plate text.

  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Modified-by: Jason A. Donenfeld <Jason@zx2c4.com>
* global: style nits (Jason A. Donenfeld, 2017-10-31; 1 file, -3/+6)
* global: infuriating kernel iterator style (Jason A. Donenfeld, 2017-10-31; 1 file, -4/+4)
  One types:

      for (i = 0 ...

  So one should also type:

      for_each_obj (obj ...

  But the upstream kernel style guidelines are insane, and so we must
  instead do:

      for_each_obj(obj ...

  Ugly, but one must choose his battles wisely.
* global: accept decent check_patch.pl suggestions (Jason A. Donenfeld, 2017-10-31; 1 file, -1/+1)
* global: add space around variable declarations (Jason A. Donenfeld, 2017-10-03; 1 file, -0/+2)
* hashtables: if we have an index match, don't search further ever (Jason A. Donenfeld, 2017-08-08; 1 file, -2/+3)
* hashtables: allow up to 2^{20} peers per interface (Jason A. Donenfeld, 2017-08-08; 1 file, -0/+22)
  This allows for nearly 1 million peers per interface, which should be
  more than enough. If needed later, this number could easily be increased
  beyond this. We also increase the size of the hashtables to accommodate
  this upper bound.

  In the future, it might be smart to dynamically expand the hashtable
  instead of this hard coded compromise value between small systems and
  large systems. Ongoing work includes figuring out the most optimal scheme
  for these hashtables and for the insertion to mask their order from
  timing inference.
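
  A sketch of what such a compromise looks like in code; the constants and
  names below are illustrative placeholders, not the values this file
  actually uses.

      #include <linux/list.h>

      /* Illustrative only: cap peers per interface near 2^20 and pick a
       * fixed power-of-two bucket count that stays tolerable on small
       * systems while keeping chains short on large deployments. */
      #define MAX_PEERS_PER_DEVICE    (1U << 20)
      #define PUBKEY_HASHTABLE_BITS   11            /* 2048 buckets */

      struct pubkey_hashtable_like {
              struct hlist_head buckets[1 << PUBKEY_HASHTABLE_BITS];
              /* ... lock, hash key, and so on ... */
      };
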
* noise: fix race when replacing handshake (Jason A. Donenfeld, 2017-06-08; 1 file, -1/+4)
  Replacing an entry that's already been replaced is something that could
  happen when processing handshake messages in parallel, when starting up
  multiple instances on the same machine.

  Reported-by: Hubert Goisern <zweizweizwoelf@gmail.com>
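
  A sketch of the defensive pattern such a fix tends to take, with made-up
  names rather than the module's actual code: before swapping a new entry
  in for an old one, check under the table lock whether the old entry is
  still hashed at all, since a parallel handshake may already have
  replaced it.

      #include <linux/list.h>
      #include <linux/spinlock.h>

      /* Returns true if the old entry was still present when replaced. */
      static bool replace_entry(struct hlist_head *bucket, spinlock_t *lock,
                                struct hlist_node *old, struct hlist_node *new)
      {
              bool old_was_present;

              spin_lock_bh(lock);
              old_was_present = !hlist_unhashed(old);
              if (old_was_present)
                      hlist_del_init(old);
              hlist_add_head(new, bucket);
              spin_unlock_bh(lock);
              return old_was_present;
      }
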
* style: spaces after for loops (Jason A. Donenfeld, 2017-05-30; 1 file, -4/+4)
* locking: always use _bh (Jason A. Donenfeld, 2017-04-04; 1 file, -19/+19)
  All locks are potentially between user context and softirq, which means
  we need to take the _bh variant.
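
  In concrete terms (a generic illustration with made-up names, not lines
  from this file): every acquisition of such a lock also disables bottom
  halves, so a softirq arriving on the same CPU cannot deadlock against a
  lock that user context already holds.

      #include <linux/spinlock.h>

      /* Shared between syscall paths and the receive softirq. */
      static DEFINE_SPINLOCK(table_lock);

      static void update_from_user_context(void)
      {
              spin_lock_bh(&table_lock);  /* take lock, disable bottom halves */
              /* ... touch state that the softirq path may also touch ... */
              spin_unlock_bh(&table_lock);
      }
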
* hashtables: get_random_int is now more secure, so expose directly (Jason A. Donenfeld, 2017-03-19; 1 file, -3/+1)
  On 4.11, get_random_u32 now either uses chacha or rdrand, rather than the
  horrible former MD5 construction, so we feel more comfortable exposing
  RNG output directly. On older kernels, we fall back to something a bit
  disgusting.
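
  Assuming a ChaCha- or RDRAND-backed generator, index generation can then
  be as direct as the sketch below (names are illustrative; older kernels
  would go through a compat fallback instead).

      #include <linux/random.h>

      static u32 fresh_index_candidate(void)
      {
              /* Strong RNG output is used as-is, with no extra hashing. */
              return get_random_u32();
      }
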
* compat: backport siphash & dst_cache from mainline (Jason A. Donenfeld, 2017-02-13; 1 file, -7/+4)
* Update copyright (Jason A. Donenfeld, 2017-01-10; 1 file, -1/+1)
* hashtables: use counter and int to ensure forward progress (Jason A. Donenfeld, 2016-12-16; 1 file, -9/+2)
* siphash: update against upstream submission (Jason A. Donenfeld, 2016-12-16; 1 file, -6/+6)
* hashtables: ensure we get 64-bits of randomness (Jason A. Donenfeld, 2016-12-12; 1 file, -1/+7)
* global: move to consistent use of uN instead of uintN_t for kernel code (Jason A. Donenfeld, 2016-12-11; 1 file, -4/+4)
* hashtable: use random number each time (Jason A. Donenfeld, 2016-11-29; 1 file, -2/+2)
  Otherwise timing information might leak information about prior index
  entries. We also switch back to an explicit uint64_t because siphash
  needs something at least that size.

  (This partially reverts 1550e9ba597946c88e3e7e3e8dcf33c13dd76e5b.
  Willy's suggestion was wrong.)
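
  Roughly, the idea is to hash a fresh 64-bit random value for each index
  rather than anything that grows predictably; a sketch under that reading,
  with placeholder names and today's kernel siphash API.

      #include <linux/random.h>
      #include <linux/siphash.h>

      static u32 pick_index(const siphash_key_t *key)
      {
              u64 nonce;

              /* A fresh random input each time, at least as wide as what
               * siphash consumes, so the result reveals nothing about how
               * many indices were handed out before it. */
              get_random_bytes(&nonce, sizeof(nonce));
              return (u32)siphash_1u64(nonce, key);
      }
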
* headers: cleanup notices (Jason A. Donenfeld, 2016-11-21; 1 file, -1/+1)
* various: nits from willy (Jason A. Donenfeld, 2016-11-15; 1 file, -2/+2)
* c89: the static keyword is okay in c99, but not in c89 (Jason A. Donenfeld, 2016-11-05; 1 file, -2/+2)
* Rework headers and includes (Jason A. Donenfeld, 2016-09-29; 1 file, -2/+2)
* hashtables: use rdrand() instead of counter (Jason A. Donenfeld, 2016-08-22; 1 file, -4/+3)
* c: specify static array size in function params (Jason A. Donenfeld, 2016-08-02; 1 file, -2/+2)
  The C standard states:

      A declaration of a parameter as ``array of type'' shall be adjusted
      to ``qualified pointer to type'', where the type qualifiers (if any)
      are those specified within the [ and ] of the array type derivation.
      If the keyword static also appears within the [ and ] of the array
      type derivation, then for each call to the function, the value of
      the corresponding actual argument shall provide access to the first
      element of an array with at least as many elements as specified by
      the size expression.

  By changing void func(int array[4]) to void func(int array[static 4]),
  we automatically get the compiler checking argument sizes for us, which
  is quite nice.
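
  A small, self-contained illustration of what that buys (the function name
  and sizes here are made up): with [static 4], compilers such as GCC and
  Clang can warn when a caller passes an array that is visibly too small,
  or a null pointer.

      /* Callee may assume 'key' points to at least 4 accessible elements. */
      void consume_key(const unsigned char key[static 4])
      {
              (void)key; /* body elided */
      }

      void caller_examples(void)
      {
              unsigned char ok[4] = { 0 };
              unsigned char too_small[2] = { 0 };

              consume_key(ok);         /* fine */
              consume_key(too_small);  /* compilers can warn: array too small */
              /* consume_key(NULL); */ /* can warn: null passed to [static 4] */
      }
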
* index hashtable: run random indices through siphash (Jason A. Donenfeld, 2016-07-22; 1 file, -1/+5)
  If /dev/urandom is a NOBUS RNG backdoor, like the infamous Dual_EC_DRBG,
  then sending 4 bytes of raw RNG output over the wire directly might not
  be such a great idea. This mitigates that vulnerability by, at some point
  before the indices are generated, creating a random secret. Then, for
  each session index, we simply run SipHash24 on an incrementing counter.

  This is probably overkill because /dev/urandom is probably not a
  backdoored RNG, and itself already uses several rounds of SHA-1 for
  mixing. If the kernel RNG is backdoored, there may very well be bigger
  problems at play. Four bytes is also not so many bytes.
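
  The described scheme boils down to something like the sketch below
  (illustrative names, using today's kernel siphash API): the secret is
  drawn from the RNG once, and after that only SipHash output of an
  incrementing counter ever reaches the wire.

      #include <linux/atomic.h>
      #include <linux/random.h>
      #include <linux/siphash.h>

      static siphash_key_t index_secret;          /* random, generated once */
      static atomic64_t index_counter = ATOMIC64_INIT(0);

      static void index_secret_init(void)
      {
              get_random_bytes(&index_secret, sizeof(index_secret));
      }

      static u32 next_session_index(void)
      {
              u64 ctr = atomic64_inc_return(&index_counter);

              /* Raw RNG output (the secret) never leaves the machine; only
               * the keyed hash of the counter is sent as the index. */
              return (u32)siphash_1u64(ctr, &index_secret);
      }
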
* Initial commit (Jason A. Donenfeld, 2016-06-25; 1 file, -0/+137)