ok bluhm@
used as solock()'s backend to protect the whole layer.
With feedback from mpi@.
ok bluhm@ claudio@
them in line with sbappendstream() and sbappendrecord().
Agreed by mpi@
The 3 subsystems: signal, poll/select and kqueue can now be addressed
separately.
Note that bpf(4) and audio(4) currently delay the wakeups to a separate
context in order to respect the KERNEL_LOCK() requirement. Sockets (UDP,
TCP) and pipes spin to grab the lock for the same reasons.
ok anton@, visa@
Introduce and use TIMEVAL_TO_NSEC() to convert SO_RCVTIMEO/SO_SNDTIMEO
specified values into nanoseconds. As a side effect it is now possible
to specify a timeout larger than (USHRT_MAX / 100) seconds.
To keep code simple `so_linger' now represents a number of seconds with
0 meaning no timeout or 'infinity'.
Yes, the 0 -> INFSLP API change makes conversions complicated as many
timeout holders are still memset()'d.
Inputs from cheloha@ and bluhm@, ok bluhm@
condition in sbcompress(). Currently the actual cluster size might
be 9KB even if the MTU is 1500; in this case a lot of memory is
wasted, since sbcompress() doesn't compress because of the previous
condition.
ok dlg claudio
this makes it easier to call since you don't have to cast to caddr_t
if it's a void *. this also changes a size argument from int to
size_t.
ok claudio@
OK mpi@
m_leadingspace() and m_trailingspace(). Convert all callers to call
directly the functions and remove the defines.
OK krw@, mpi@
back rev 1.90.
----
mbufs and mbuf clusters are now backed by large pools. Because of this
we can relax the oversubscribe limit of socket buffers a fair bit.
Instead of maxing out at sb_max * 1.125 or 2 * sb_hiwat, the maximum is
increased to 8 * sb_hiwat -- which seems to be a good compromise between
memory waste and better socket buffer usage.
OK deraadt@
----
ok benno@
variables can be declared constant.
OK claudio@ mpi@
Instead introduce two flags to deal with global lock recursion. This
is necessary until we get a per-socket lock.
Req. by and ok visa@
locking.
ok visa@, bluhm@
...and release it in sounlock(). This will allow us to progressively
remove the KERNEL_LOCK() in syscalls.
ok visa@ some time ago
AF_UNIX is both the historical _and_ standard name, so prefer and recommend
it in the headers, manpages, and kernel.
ok miller@ deraadt@ schwarze@
Requested by claudio@
we can relax the oversubscribe limit of socket buffers a fair bit.
Instead of maxing out at sb_max * 1.125 or 2 * sb_hiwat, the maximum is
increased to 8 * sb_hiwat -- which seems to be a good compromise between
memory waste and better socket buffer usage.
OK deraadt@
ok millert@ krw@
SB_KNOTE remains the only bit set on `sb_flagsintr' as it is set/unset in
contexts related to kqueue(2) where we'd like to avoid grabbing solock().
While here add some KERNEL_LOCK()/UNLOCK() dances around selwakeup() and
csignal() to mark which remaining functions need to be addressed in the
socket layer.
ok visa@, bluhm@
KERNEL_LOCK(), so change asserts accordingly.
This is now possible since sblock()/sbunlock() are always called with
the socket lock held.
ok bluhm@, visa@
Tested by Hrvoje Popovski, ok bluhm@
selwakeup().
ok bluhm@
with the socket lock.
This change is safe because sbreserve() already asserts that the lock is
held, but it acts as implicit documentation and indicates that I looked
at the function.
Implicitly protects `so_state' with the socket lock in sosend().
ok visa@, bluhm@
ok bluhm@, visa@
ok bluhm@, visa@
While here document an abuse of parent socket's lock.
Problem reported by krw@, analysis and ok bluhm@
buffers.
This is one step towards unlocking the TCP input path. Note that the
functions asserting for the socket lock are not necessarily MP-safe;
not all fields of 'struct socket' are protected.
Introduce a new kernel-only kqueue hint, NOTE_SUBMIT, to be able to
tell when a filter needs to lock the underlying data structures. Logic
and name taken from NetBSD.
Tested by Hrvoje Popovski.
ok claudio@, bluhm@, mikeb@
pfkey and unix sockets.
ok claudio@
Only pfkeyv2_send() needs the NET_LOCK(), so grab it at the start and
release it at the end. This should allow pushing the locks down in other
places.
OK mpi@, bluhm@
Recursions are still marked as XXXSMP.
ok deraadt@, bluhm@
For the moment the NET_LOCK() is always taken by threads running under
KERNEL_LOCK(). That means it doesn't buy us anything except a possible
deadlock that we did not spot. So make sure this doesn't happen, we'll
have plenty of time in the next release cycle to stress test it.
ok visa@
Attach is quite a different thing to the other PRU functions and
this should make locking a bit simpler. This also removes the ugly
hack on how proto was passed to the attach function.
OK bluhm@ and mpi@ on a previous version
The only function that needs the lock is rtm_output() as it messes with
the routing table. So grab the lock there since it is safe to sleep
in a process context.
ok bluhm@
unix domain sockets.
This should prevent the multiple deadlocks related to unix domain sockets.
Inputs from millert@ and bluhm@, ok bluhm@
Recursions are currently known and marked as XXXSMP.
Please report any assert to bugs@
splsoftnet()/splx() until the known issues are fixed.
In other words, stop using a rwlock since it creates a deadlock when
chrome is used.
Issue reported by Dimitris Papastamos and kettenis@
ok visa@
of the network stack that are not yet ready to be executed in parallel or
where new sleeping points are not possible.
This first pass replaces all the entry points leading to ip_output().
This is done so as not to introduce new sleeping points when trying to
acquire ART's write lock, needed when a new L2 entry is created via
RT_RESOLVE.
Inputs from and ok bluhm@, ok dlg@
in process context. The read/write lock introduced in rev 1.64
would create lock ordering problems with the upcoming SOCKET_LOCK()
mechanism. The current tsleep() in sblock() must be replaced with
rwsleep(&socketlock) later. The sb_flags are protected by
KERNEL_LOCK(). They must not be accessed from interrupt context,
but nowadays softnet() is not an interrupt anyway.
OK mpi@
have an splsoftassert(IPL_SOFTNET) now, so sowakeup() does not need
to call splsoftnet() anymore.
From mpi@'s netlock diff; OK mikeb@
|
splsoftnet() if the function does a splsoftassert(IPL_SOFTNET)
anyway.
socket buffer had no space anymore. The default mbuf space limit
was only 32 KB. So no more data from user-land was accepted. As
tcp_output() keeps the mbuf cluster for retransmits, it will be
freed only after all ACKs have been received. That has killed our
TCP send performance totally. To allow cycling through the mbufs
periodically, we need space for at least 3 of them.
Reported by Andreas Bartelt; testing with mikeb@; OK mikeb@ claudio@
ok mikeb bluhm
CMSG_SIZE(len) bytes of the mbuf.
ok bluhm@, claudio@, dlg@
compatibility with 4.3BSD in September 1989.
*Pick your own definition for "temporary".
ok bluhm@, claudio@, dlg@
have any direct symbols used. Tested for indirect use by compiling
amd64/i386/sparc64 kernels.
ok tedu@ deraadt@
ok mpi@ kspillner@
confirmation: it was only used for netiso, which was deleted a *decade* ago
ok mpi@ claudio@ ports scan by sthen@