----------------------------------------------------------------------
as |= and &= are non-atomic operations. To avoid additional locks,
put the flags that have to be accessed from interrupt context into a
separate 32-bit integer field, sb_flagsintr. sb_flagsintr is protected
by splsoftnet.
Input from miod@ deraadt@; OK deraadt@
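The race is easiest to see in code: |= compiles to a load, an OR and a
store, so an interrupt taken between the load and the store loses an
update. A minimal sketch of the split-field approach, assuming a
simplified stand-in for the real struct sockbuf:

/*
 * sb_flags stays process-context only; sb_flagsintr takes the
 * interrupt-visible flags and is touched only under splsoftnet().
 * (Illustrative struct, not the real sockbuf layout.)
 */
struct sockbuf_sketch {
	short	sb_flags;	/* process context only */
	int	sb_flagsintr;	/* shared with interrupt context */
};

void
sb_set_intr_flag(struct sockbuf_sketch *sb, int flag)
{
	int s;

	s = splsoftnet();		/* block soft network interrupts */
	sb->sb_flagsintr |= flag;	/* read-modify-write, now safe */
	splx(s);
}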
----------------------------------------------------------------------
containing m_nextpkt chains.
OK markus@
----------------------------------------------------------------------
This fixes the NFS problems reported on the mailing list
and ensures that accepted sockets have correct socket buffer
settings. OK blambert@, henning@
----------------------------------------------------------------------
The send buffer is scaled by not accounting unacknowledged, on-the-wire
data against the buffer limit. Receive buffer scaling is done similarly
to FreeBSD -- measure the delay * bandwidth product and base the
buffer on that. The problem is that our RTT measurement is coarse,
so it overshoots on low-delay links. This does not matter much,
since the receive buffer is almost always empty.
Add a back-pressure mechanism to control the amount of memory
assigned to socket buffers; it kicks in when 80% of the cluster
pool is used.
Increases the download speed from 300kB/s to 4.4MB/s on ftp.eu.openbsd.org.
Based on work by markus@ and djm@.
OK dlg@, henning@, "put it in" deraadt@
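In sketch form, the receive-side policy might look like the following;
apart from the 80% threshold quoted above, every name and number here
(the growth rule, the 2 MB cap) is a hypothetical illustration:

#include <sys/param.h>		/* MIN() */

#define RECVBUF_CAP	(2 * 1024 * 1024)	/* hypothetical cap */

/*
 * Grow the receive buffer toward the measured delay * bandwidth
 * product, but stop growing once 80% of the cluster pool is in use.
 */
u_long
scale_recvbuf(u_long cur, u_long bw_delay_bytes, u_long pct_clusters_used)
{
	if (pct_clusters_used >= 80)	/* back pressure kicks in */
		return (cur);
	if (bw_delay_bytes > cur)	/* coarse RTT can overshoot here */
		return (MIN(2 * bw_delay_bytes, RECVBUF_CAP));
	return (cur);
}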
----------------------------------------------------------------------
supported it doesn't do any harm), so put the KNOTE() in selwakeup() itself and
remove it from all occurrences where both are used, except one for kqueue itself
and one in sys_pipe.c (where the selwakeup() is under a PIPE_SEL flag).
Based on a diff from tedu.
ok deraadt
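The consolidation, as a sketch: selwakeup() posts the kqueue event
itself, so callers stop pairing the two calls by hand. si_note is the
usual struct selinfo klist; the select-path helper is a placeholder for
the rest of the original function body:

void
selwakeup(struct selinfo *sip)
{
	/* post the kqueue event here, so callers no longer need
	 * their own KNOTE() next to every selwakeup() */
	KNOTE(&sip->si_note, 0);

	/* ... then the traditional select/poll wakeup ... */
	do_select_wakeup(sip);	/* placeholder, not a real kernel function */
}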
----------------------------------------------------------------------
just use strings and make things unique.
ok claudio@
----------------------------------------------------------------------
ok art@, henning@
----------------------------------------------------------------------
levels. This will allow platforms where soft interrupt levels do not
map to real hardware interrupt levels to have soft IPL values overlapping
hard IPL values without breaking spl asserts.
----------------------------------------------------------------------
This sort of breaking with traditional and expected behavior annoys me.
"yes!" henning@
----------------------------------------------------------------------
of. currently limited to MCLBYTES (2048 bytes) and 4096 bytes until pools
can allocate objects of sizes greater than PAGESIZE.
this allows drivers to ask for "jumbo" packets to fill rx rings with.
the second half of this change is per-interface mbuf cluster allocator
statistics. drivers can use the new interface (MCLGETI), which will use
these stats to selectively fail allocations based on demand for mbufs. if
the driver isn't rapidly consuming rx mbufs, we don't allow it to allocate
many to put on its rx ring.
drivers require modifications to take advantage of both the new allocation
semantics and large clusters.
this was written and developed with deraadt@ over the last two days.
ok deraadt@ claudio@
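As a sketch of the driver-facing side, an rx-ring fill path might look
like this; the exact MCLGETI macro signature changed across releases,
so the call shape below is an assumption:

/* fill one rx slot; NULL means the allocator applied back pressure */
struct mbuf *
rx_fill_one(struct ifnet *ifp)
{
	struct mbuf *m;

	/* MCLGETI consults the per-interface stats and fails the
	 * allocation when this interface isn't consuming its mbufs */
	m = MCLGETI(NULL, M_DONTWAIT, ifp, MCLBYTES);
	if (m == NULL)
		return (NULL);

	m->m_len = m->m_pkthdr.len = MCLBYTES;
	return (m);	/* caller maps it and hangs it on the ring */
}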
----------------------------------------------------------------------
get hung in nfs_reconnect() because they do not have the proper
privileges to bind to a socket, by adding a struct proc * argument
to sobind() (and the *_usrreq() routines, and finally in{6}_pcbbind)
and doing the sobind() with proc0 in nfs_connect.
OK markus@, blambert@.
"go ahead" deraadt@.
Fixes an issue reported by bernd@ (tested by bernd@).
Fixes PR5135 too.
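The visible API change in sketch form; bind_as_kernel() is a
hypothetical wrapper mirroring what nfs_connect() now does, and proc0
is the kernel's own process:

/* the acting process is now passed down through the *_usrreq()
 * routines to in{6}_pcbbind for the reserved-port privilege check */
int	sobind(struct socket *so, struct mbuf *nam, struct proc *p);

int
bind_as_kernel(struct socket *so, struct mbuf *nam)
{
	return (sobind(so, nam, &proc0));	/* proc0 is always privileged */
}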
----------------------------------------------------------------------
mclpool as an extern, do so explicitly
ok henning@ claudio@
----------------------------------------------------------------------
by adding a sb_datacc count to sockbuf that counts data excluding
the MT_CONTROL and MT_SONAME mbuf types. With help from deraadt@.
okay deraadt@ claudio@
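The counting rule, sketched with an illustrative stand-in for the real
sockbuf accounting:

struct sb_counts {
	u_long	sb_cc;		/* every byte queued in the buffer */
	u_long	sb_datacc;	/* bytes excluding control/address mbufs */
};

void
sb_count_mbuf(struct sb_counts *sb, struct mbuf *m)
{
	sb->sb_cc += m->m_len;
	if (m->m_type != MT_CONTROL && m->m_type != MT_SONAME)
		sb->sb_datacc += m->m_len;	/* only real data counts */
}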
----------------------------------------------------------------------
created the socket, and populate it. ok bob@, henning@
----------------------------------------------------------------------
to get some experience with these ideas.
add sbcheckreserve() api; called by accepting sockets. if over 95% of
mbuf clusters are busy, consider this resource starvation just like the
other reasons for accept failing. also, if over 50% of mbuf clusters are
busy, shrink recv & send sockbuf reserves to "the minimum".
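The two thresholds in sketch form; the parameter names and the ENOBUFS
return are assumptions, only the 95%/50% policy comes from the message
above:

int
sbcheckreserve_sketch(u_long want, u_long defsize, u_long busy, u_long total)
{
	/* over 95% of clusters busy: fail the accept like any
	 * other resource shortage */
	if (busy * 100 > total * 95)
		return (ENOBUFS);

	/* over 50% busy: grant no reserve beyond the minimum */
	if (want > defsize && busy * 100 > total * 50)
		return (ENOBUFS);

	return (0);
}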
----------------------------------------------------------------------
takes a void *. convert uiomove to take a void * as well. ok deraadt@
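The resulting prototype, with the old caddr_t shape shown for contrast
(the "before" line is the historical signature this change replaces):

/* before: int uiomove(caddr_t cp, int n, struct uio *uio); */
int	uiomove(void *cp, int n, struct uio *uio);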
----------------------------------------------------------------------
rescinded 22 July 1999. Proofed by myself and Theo.
----------------------------------------------------------------------
Diff generated by Chris Kuethe.
----------------------------------------------------------------------
Bug report from Alistair Kerr, tested miod@, inspected art@, ok provos@
----------------------------------------------------------------------
Make insertion of data into socket buffers O(C):
* Keep pointers to the first and last mbufs of the last record in the
  socket buffer.
* Use the sb_lastrecord pointer in the sbappend*() family of functions
  to avoid traversing the packet chain to find the last record.
* Add a new sbappend_stream() function for stream protocols which
  guarantee that there will never be more than one record in the
  socket buffer. This function uses the sb_mbtail pointer to perform
  the data insertion. Make TCP use sbappend_stream(). On a profiling
  run, this takes sbappend of a TCP transmission using a 1M socket
  buffer from 50% of the time down to 0.02%. Thanks to Bill Sommerfeld
  and YAMAMOTO Takashi for their debugging assistance!
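The stream case, as a sketch: with at most one record in the buffer,
the cached tail pointer replaces the walk over the entire chain. Field
names follow the commit; the body is illustrative and omits the
byte-count accounting the real function also does:

void
sbappend_stream_sketch(struct sockbuf *sb, struct mbuf *m)
{
	if (sb->sb_mbtail != NULL)
		sb->sb_mbtail->m_next = m;	/* O(C): link at cached tail */
	else
		sb->sb_mb = sb->sb_lastrecord = m;	/* buffer was empty */

	while (m->m_next != NULL)	/* walk only the newly added chain */
		m = m->m_next;
	sb->sb_mbtail = m;
}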
----------------------------------------------------------------------
based on freebsd. okay art@ markus@
----------------------------------------------------------------------
Give different names for different wait channels
----------------------------------------------------------------------
from NetBSD:
Wed Jan 7 23:47:08 1998 UTC by thorpej
Make insertion and removal of sockets from the partial and incoming
connection queues O(C) rather than O(N).
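The underlying data-structure change, sketched with queue(3) macros: a
tail queue gives constant-time insert and remove, where a singly
linked queue has to be walked to unlink a socket. Names here are
illustrative:

#include <sys/queue.h>

struct conn;
TAILQ_HEAD(connq, conn);

struct conn {
	TAILQ_ENTRY(conn) c_entry;
};

/* move a connection from the partial queue to the completed queue */
void
conn_complete(struct connq *q0, struct connq *q, struct conn *c)
{
	TAILQ_REMOVE(q0, c, c_entry);	/* O(1): no list traversal */
	TAILQ_INSERT_TAIL(q, c, c_entry);
}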
----------------------------------------------------------------------
Stops a nasty little program supplied by gustavo@core-sdi.com
----------------------------------------------------------------------
#define sonewconn(head, connstatus) sonewconn1((head), (connstatus))
Just wastes preprocessor time.
----------------------------------------------------------------------
okay art@, millert@
----------------------------------------------------------------------
last cmsg_data item (see the figure on RFC2292 page 18).
----------------------------------------------------------------------
only ipv6 tools (which touch ancillary data) are affected.
From: Göran Bengtson <goeran@cdg.chalmers.se>
----------------------------------------------------------------------
replaces the NRL IPv6 layer. reuses the NRL pcb layer. no IPsec-on-v6
support. see sys/netinet6/{TODO,IMPLEMENTATION} for more details.
GENERIC configuration should work fine as before. GENERIC.v6 works fine
as well, but you'll need the KAME userland tools to play with IPv6 (they
will be brought in soon).