path: root/sys/kern/uipc_socket.c
Commit log (most recent first); each entry ends with [author date, files changed, lines -/+].
...
* Extend the scope of the socket lock to protect `so_state' in connect(2).
  As a side effect, soconnect() and soconnect2() now expect a locked
  socket, so update all the callers. ok bluhm@
  [mpi 2017-07-24, 1 file, -7/+6]
* If pool_get() sleeps while allocating additional memory for socket
  splicing, another process may allocate it in the meantime. Then one
  of the splicing structures leaked in sosplice(). Recheck that no
  struct sosplice exists after a potential sleep.
  Reported by Ilja Van Sprundel; OK mpi@
  [bluhm 2017-07-20, 1 file, -5/+16]
* Prepare filt_soread() to be locked. No functional change.
  ok bluhm@, claudio@, visa@
  [mpi 2017-07-20, 1 file, -9/+14]
* Do not unlock the netlock in the goto out error path before it has
  been acquired in sosend(). Fixes a kernel lock assertion panic.
  OK visa@ mpi@
  [bluhm 2017-07-13, 1 file, -3/+4]
* Revert grabbing the socket lock in kqueue filters.
  It is unsafe to sleep while iterating the list of pending events in
  kqueue_scan(). Reported by abieber@ and juanfra@
  [mpi 2017-07-08, 1 file, -11/+2]
* Always hold the socket lock when calling sblock().
  Implicitly protects `so_state' with the socket lock in sosend().
  ok visa@, bluhm@
  [mpi 2017-07-04, 1 file, -25/+26]
* Protect `so_state', `so_error' and `so_qlen' with the socket lock in
  kqueue filters. ok millert@, bluhm@, visa@
  [mpi 2017-07-03, 1 file, -14/+23]
* Add missing solock()/sounlock() dances around sbreserve().
  While here document an abuse of parent socket's lock.
  Problem reported by krw@, analysis and ok bluhm@
  [mpi 2017-06-27, 1 file, -1/+7]
* Assert that the corresponding socket is locked when manipulating socket
  buffers. This is one step towards unlocking the TCP input path.
  Note that the functions asserting for the socket lock are not
  necessarily MP-safe; not all fields of 'struct socket' are protected.
  Introduce a new kernel-only kqueue hint, NOTE_SUBMIT, to be able to
  tell when a filter needs to lock the underlying data structures.
  Logic and name taken from NetBSD. Tested by Hrvoje Popovski.
  ok claudio@, bluhm@, mikeb@
  [mpi 2017-06-26, 1 file, -13/+19]
* In ddb print socket bit field so_state in hex to match SS_ defines.
  [bluhm 2017-06-20, 1 file, -2/+2]
* Convert sodidle() to timeout_set_proc(9); it needs a process context
  to grab the rwlock. Problem reported by Rivo Nurges. ok bluhm@
  [mpi 2017-06-20, 1 file, -2/+2]
* New socket option SO_ZEROIZE: zero out all mbufs sent over the socket.
  ok deraadt bluhm
  [markus 2017-05-31, 1 file, -1/+5]
* Push the NET_LOCK down into PF_KEY so that it can be treated like PF_ROUTE.
  Only pfkeyv2_send() needs the NET_LOCK(), so grab it at the start and
  release it at the end. This should allow pushing the locks down in
  other places. OK mpi@, bluhm@
  [claudio 2017-05-27, 1 file, -2/+3]
* so_splicelen needs to be protected by the socket lock. We are now
  safe since we're always holding the KERNEL_LOCK(), but we want to
  move away from that. Suggested by and ok bluhm@
  [mpi 2017-05-15, 1 file, -3/+4]
* Enable the NET_LOCK(), take 3.
  Recursions are still marked as XXXSMP. ok deraadt@, bluhm@
  [mpi 2017-05-15, 1 file, -2/+4]
* Less convoluted code in soshutdown(). ok guenther
  [deraadt 2017-04-02, 1 file, -3/+3]
* Revert the NET_LOCK() and bring back pf's contention lock for release.
  For the moment the NET_LOCK() is always taken by threads running
  under the KERNEL_LOCK(). That means it doesn't buy us anything except
  a possible deadlock that we did not spot. So make sure this doesn't
  happen; we'll have plenty of time in the next release cycle to stress
  test it. ok visa@
  [mpi 2017-03-17, 1 file, -4/+2]
* Move PRU_ATTACH out of the pr_usrreq functions into pr_attach.
  Attach is quite a different thing from the other PRU functions and
  this should make locking a bit simpler. This also removes the ugly
  hack of how proto was passed to the attach function.
  OK bluhm@ and mpi@ on a previous version
  [claudio 2017-03-13, 1 file, -4/+4]
* Do not grab the NET_LOCK() for routing socket operations.
  The only function that needs the lock is rtm_output(), as it messes
  with the routing table. So grab the lock there, since it is safe to
  sleep in a process context. ok bluhm@
  [mpi 2017-03-07, 1 file, -2/+3]
* Prevent a recursion in the socket layer.
  Always defer soreceive() to an nfsd(8) process instead of doing it in
  the 'softnet' thread. Avoiding this recursion ensures that we do not
  introduce a new sleeping point by releasing and grabbing the netlock.
  Tested by many; committing now in order to find possible performance
  regressions.
  [mpi 2017-03-03, 1 file, -6/+2]
* Wrap the NET_LOCK() into a per-socket solock() that does nothing for
  unix domain sockets. This should prevent the multiple deadlocks
  related to unix domain sockets.
  Inputs from millert@ and bluhm@, ok bluhm@
  [mpi 2017-02-14, 1 file, -66/+65]
* In sogetopt, preallocate an mbuf to avoid using sleeping mallocs with
  the netlock held. This also changes the prototypes of the *ctloutput
  functions to take an mbuf instead of an mbuf pointer.
  Help and guidance from bluhm@ and mpi@; ok bluhm@
  [dhill 2017-02-01, 1 file, -11/+21]
* In sosend() the size of the control message for file descriptor
  passing is checked. As the data type has changed in unp_internalize(),
  the calculation has to be adapted in sosend(). Found by relayd
  regress test on i386. OK millert@
  [bluhm 2017-01-27, 1 file, -2/+2]
* Do not hold the netlock while pool_get() may sleep. It is not
  necessary to lock code that initializes a new socket structure before
  it has been linked to any global list. OK mpi@
  [bluhm 2017-01-26, 1 file, -2/+2]
* As NET_LOCK() is a read/write lock, it can sleep in sotask(). So
  the TASKQ_CANTSLEEP flag is no longer valid for the splicing thread.
  OK mikeb@
  [bluhm 2017-01-25, 1 file, -3/+2]
* Enable the NET_LOCK(), take 2.
  Recursions are currently known and marked as XXXSMP.
  Please report any assert to bugs@
  [mpi 2017-01-25, 1 file, -8/+15]
* Change NET_LOCK()/NET_UNLOCK() to be simple wrappers around
  splsoftnet()/splx() until the known issues are fixed. In other words,
  stop using a rwlock since it creates a deadlock when chrome is used.
  Issue reported by Dimitris Papastamos and kettenis@. ok visa@
  [mpi 2016-12-29, 1 file, -7/+4]
* Grab the NET_LOCK() in so{s,g}etopt(), pffasttimo() and pfslowtimo().
  ok rzalamena@, bluhm@
  [mpi 2016-12-20, 1 file, -11/+11]
* Introduce the NET_LOCK(), a rwlock used to serialize accesses to the
  parts of the network stack that are not yet ready to be executed in
  parallel or where new sleeping points are not possible.
  This first pass replaces all the entry points leading to ip_output().
  This is done to not introduce new sleeping points when trying to
  acquire ART's write lock, needed when a new L2 entry is created via
  RT_RESOLVE. Inputs from and ok bluhm@, ok dlg@
  [mpi 2016-12-19, 1 file, -60/+64]
* m_free() and m_freem() test for NULL. Simplify callers which had
  their own NULL tests. ok mpi@
  [jsg 2016-11-29, 1 file, -7/+4]
* Some socket splicing tests on loopback hang with large mbufs and
  reduced buffer size. If the send buffer size is less than the size of
  a single mbuf, it will never fit. So if the send buffer is empty,
  split the large mbuf and move only a part. OK claudio@
  [bluhm 2016-11-23, 1 file, -3/+11]
* Enforce that pr_ctloutput is called at IPL_SOFTNET.
  This will allow us to keep locking simple as soon as we trade
  splsoftnet() for a rwlock. ok bluhm@
  [mpi 2016-11-22, 1 file, -13/+29]
* Enforce that pr_usrreq functions are called at IPL_SOFTNET.
  This will allow us to keep locking simple as soon as we trade
  splsoftnet() for a rwlock. ok bluhm@, claudio@
  [mpi 2016-11-21, 1 file, -1/+3]
* Remove splnet() from socket kqueue code.
  splnet() was necessary when link state changes were executed from
  hardware interrupt handlers; nowadays all the changes are serialized
  by the KERNEL_LOCK(), so assert that it is held instead. ok mikeb@
  [mpi 2016-11-14, 1 file, -11/+8]
* Remove redundant comments that say a function must be called at
  splsoftnet() if the function does a splsoftassert(IPL_SOFTNET) anyway.
  [bluhm 2016-10-06, 1 file, -8/+1]
* Separate splsoftnet() from variable initialization.
  From mpi@'s netlock diff; OK mikeb@
  [bluhm 2016-10-06, 1 file, -9/+8]
* Protect soshutdown() with splsoftnet() to define one layer where
  we enter networking code. Fixes an splassert() found by David Hill.
  OK mikeb@
  [bluhm 2016-09-20, 1 file, -4/+10]
* Add some spl softnet assertions that will help us to find the right
  places for the upcoming network lock. This might trigger some
  asserts, but we have to find the missing code paths. OK mpi@
  [bluhm 2016-09-20, 1 file, -12/+11]
* All pools have their ipl set via pool_setipl, so fold it into pool_init.
  The ioff argument to pool_init() is unused and has been for many
  years, so this replaces it with an ipl argument. Because the ipl will
  be set on init we no longer need pool_setipl.
  Most of these changes have been done with coccinelle using the spatch
  below. cocci sucks at formatting code though, so i fixed that by
  hand. The manpage and subr_pool.c bits i did myself.
  ok tedu@ jmatthew@

      @ipl@
      expression pp;
      expression ipl;
      expression s, a, o, f, m, p;
      @@
      -pool_init(pp, s, a, o, f, m, p);
      -pool_setipl(pp, ipl);
      +pool_init(pp, s, a, ipl, f, m, p);

  [dlg 2016-09-15, 1 file, -6/+5]
* Do not raise splsoftnet() recursively in soaccept().
  This is not an issue right now, but it will become one when a
  non-recursive lock is used. ok claudio@
  [mpi 2016-09-13, 1 file, -3/+3]
* If sosend() cannot allocate a large cluster, try a small one as
  fallback. OK claudio@
  [bluhm 2016-09-03, 1 file, -1/+3]
* Return immediately when m_getuio() fails due to an invalid uio
  parameter. ok mikeb bluhm claudio
  [yasuoka 2016-09-03, 1 file, -1/+3]
* Spliced TCP sockets become faster when the output part is running
  as its own task thread. This is inspired by userland copy where a
  process also has to go through the scheduler. This gives the socket
  buffer a chance to be filled up and tcp_output() is called less often
  and with bigger chunks.
  When two kernel tasks share all the workload, the current scheduler
  implementation will hang userland processes on single cpu machines.
  As a workaround put a yield() into the splicing thread after each
  task execution. This reduces the number of calls of tcp_output()
  even more. OK tedu@ mpi@
  [bluhm 2016-08-25, 1 file, -9/+48]
* Completely revert the M_WAIT change on the cluster allocation and
  bring back the behaviour of rev 1.72. Although allocating small mbufs
  when allocating an mbuf cluster fails seems suboptimal, this should
  not be changed as a side effect when introducing m_getuio().
  OK claudio@
  [bluhm 2016-08-25, 1 file, -9/+5]
* Refactor the uio to mbuf code out of sosend and start to make use of
  MCLGETI and large mbuf clusters. This should speed up local
  connections a fair bit. OK dlg@ and bluhm@ (after reverting the
  M_WAIT change on the cluster allocation)
  [claudio 2016-08-22, 1 file, -53/+82]
* On localhost a user program may create a socket splicing loop.
  After writing data into this loop, it was spinning forever causing a
  kernel hang. Detect the loop by counting how often the same mbuf is
  spliced. If that happens 128 times, assume that there is a loop and
  abort the splicing with ELOOP.
  Bug found by tedu@; OK tedu@ millert@ benno@
  [bluhm 2016-06-13, 1 file, -2/+11]
* Fix format string in ddb show socket.
  [bluhm 2016-06-12, 1 file, -2/+2]
* Change a bunch of (<blah> *)0 to NULL. ok beck@ deraadt@
  [krw 2016-03-14, 1 file, -2/+2]
* Improve the socket panic messages further. claudio@ wants to see
  the socket type and dlg@ is interested in the pointers for ddb show
  socket. OK deraadt@ dlg@
  [bluhm 2016-01-15, 1 file, -9/+15]
* print TAILQ_NEXT(so, so_qe) too
  [dlg 2016-01-15, 1 file, -1/+2]