path: root/sys/kern/uipc_socket2.c
...
* remove some obsolete casts  (tedu, 2013-04-05; -7/+7)
* Changing the socket buffer flags sb_flags was not interrupt safe  (bluhm, 2013-01-15; -5/+9)
  as |= and &= are non-atomic operations. To avoid additional locks, put
  the flags that have to be accessed from interrupt into a separate
  sb_flagsintr 32 bit integer field. sb_flagsintr is protected by
  splsoftnet. Input from miod@ deraadt@; OK deraadt@
* Extend the sbcheck() function to make it work with socket buffers  (bluhm, 2012-12-31; -9/+11)
  containing m_nextpkt chains. OK markus@
* unnecessary casts to unsigned; ok claudio  (deraadt, 2012-04-13; -2/+2)
* Correctly inherit and set the watermarks on socketbuffers.  (claudio, 2011-04-04; -1/+11)
  This fixes the NFS problems reported on the mailing list and ensures
  that accepted sockets have correct socketbuffer settings. OK
  blambert@, henning@
* TCP send and recv buffer scaling.  (claudio, 2010-09-24; -17/+22)
  Send buffer is scaled by not accounting unacknowledged on-the-wire
  data against the buffer limit. Receive buffer scaling is done similar
  to FreeBSD -- measure the delay * bandwidth product and base the
  buffer on that. The problem is that our RTT measurement is coarse, so
  it overshoots on low-delay links. This does not matter that much since
  the recv buffer is almost always empty. Add a back-pressure mechanism
  to control the amount of memory assigned to socketbuffers that kicks
  in when 80% of the cluster pool is used. Increases the download speed
  from 300kB/s to 4.4MB/s on ftp.eu.openbsd.org. Based on work by
  markus@ and djm@. OK dlg@, henning@, put it in deraadt@
* Every selwakeup() should have a matching KNOTE()  (nicm, 2009-11-09; -2/+1)
  (even if kqueue isn't supported it doesn't do any harm), so put the
  KNOTE() in selwakeup() itself and remove it from any occurrences where
  both are used, except one for kqueue itself and one in sys_pipe.c
  (where the selwakeup is under a PIPE_SEL flag). Based on a diff from
  tedu. ok deraadt
* Don't use char arrays for sleep wchans and reuse them.  (thib, 2009-08-10; -9/+3)
  just use strings and make things unique. ok claudio@
* bzero -> PR_ZERO  (blambert, 2009-03-30; -3/+2)
  ok art@, henning@
* Introduce splsoftassert(), similar to splassert() but for soft interrupt  (miod, 2009-03-15; -2/+2)
  levels. This will allow for platforms where soft interrupt levels do
  not map to real hardware interrupt levels to have soft ipl values
  overlapping hard ipl values without breaking spl asserts.
* Change sbreserve() to return 0 on success, 1 on failure, as god intended.  (blambert, 2009-01-13; -5/+5)
  This sort of breaking with traditional and expected behavior annoys
  me. "yes!" henning@
* add several backend pools to allocate mbuf clusters of various sizes out  (dlg, 2008-11-24; -4/+4)
  of. currently limited to MCLBYTES (2048 bytes) and 4096 bytes until
  pools can allocate objects of sizes greater than PAGESIZE. this allows
  drivers to ask for "jumbo" packets to fill rx rings with. the second
  half of this change is per-interface mbuf cluster allocator
  statistics. drivers can use the new interface (MCLGETI), which will
  use these stats to selectively fail allocations based on demand for
  mbufs. if the driver isn't rapidly consuming rx mbufs, we don't allow
  it to allocate many to put on its rx ring. drivers require
  modifications to take advantage of both the new allocation semantic
  and large clusters. this was written and developed with deraadt@ over
  the last two days ok deraadt@ claudio@
* Deal with the situation when TCP nfs mounts timeout and processes  (thib, 2008-05-23; -3/+3)
  get hung in nfs_reconnect() because they do not have the proper
  privileges to bind to a socket, by adding a struct proc * argument to
  sobind() (and the *_usrreq() routines, and finally in{6}_pcbbind) and
  do the sobind() with proc0 in nfs_connect. OK markus@, blambert@. "go
  ahead" deraadt@. Fixes an issue reported by bernd@ (Tested by bernd@).
  Fixes PR5135 too.
* instead of relying on mbuf.h to include pool.h and declare  (blambert, 2007-09-19; -1/+4)
  mclpool as an extern, do so explicitly ok henning@ claudio@
* exclude control data from the number of bytes returned by FIONREAD ioctl()  (kurt, 2007-02-26; -1/+6)
  by adding a sb_datacc count to sockbuf that counts data excluding
  MT_CONTROL and MT_SONAME mbuf types. w/help from deraadt@. okay
  deraadt@ claudio@
* ansi/deregister  (jsg, 2006-01-05; -38/+18)
* remove trailing newline in panic(9); ok millert@ and deraadt@  (fgsch, 2005-07-18; -2/+2)
* add a field to struct socket that stores the pid of the process that  (dhartmei, 2005-05-27; -1/+2)
  created the socket, and populate it. ok bob@, henning@
* change sb_mbmax to: (sb_max/MCLBYTES) * (MSIZE+MCLBYTES); ok deraadt  (markus, 2004-04-25; -4/+3)
* this is only a work in progress, we can perfect afterwards, but it is time  (deraadt, 2004-04-19; -2/+31)
  to get some experience with these ideas. add sbcheckreserve() api;
  called by accepting sockets. if over 95% of mbuf clusters are busy,
  consider this a resource starvation just like the other reasons for
  accept failing. also, if over 50% of mbuf clusters are busy, shrink
  recv & send sockbuf reserves to "the minimum".
* use NULL for ptrs. parts from Joris Vink  (tedu, 2004-04-01; -13/+13)
* remove caddr_t casts. it's just silly to cast something when the function  (tedu, 2003-07-21; -11/+11)
  takes a void *. convert uiomove to take a void * as well. ok deraadt@
* Remove the advertising clause in the UCB license which Berkeley  (millert, 2003-06-02; -6/+2)
  rescinded 22 July 1999. Proofed by myself and Theo.
* Remove more '\n's from panic() statements. Both trailing and leading.  (krw, 2002-10-12; -2/+2)
  Diff generated by Chris Kuethe.
* constify a few strings. various@ ok  (art, 2002-10-10; -5/+5)
* Update sb_lastrecord in sbcompress() when the mbuf pointed to is removed.  (dhartmei, 2002-08-26; -1/+3)
  Bug report from Alistair Kerr, tested miod@, inspected art@, ok
  provos@
* redo socketbuf speedup.  (provos, 2002-08-08; -61/+169)
* backout the tree break. ok pb@, art@  (todd, 2002-08-08; -169/+61)
* socket buf speedup from thorpej@netbsd, okay art@ ericj@:  (provos, 2002-08-08; -61/+169)
  Make insertion of data into socket buffers O(C):
  * Keep pointers to the first and last mbufs of the last record in the
    socket buffer.
  * Use the sb_lastrecord pointer in the sbappend*() family of functions
    to avoid traversing the packet chain to find the last record.
  * Add a new sbappend_stream() function for stream protocols which
    guarantee that there will never be more than one record in the
    socket buffer. This function uses the sb_mbtail pointer to perform
    the data insertion. Make TCP use sbappend_stream().
  On a profiling run, this makes sbappend of a TCP transmission using a
  1M socket buffer go from 50% of the time to .02% of the time. Thanks
  to Bill Sommerfeld and YAMAMOTO Takashi for their debugging
  assistance!
* splassert where necessary  (art, 2002-06-11; -4/+4)
* track egid/rgid on bound/connected sockets too (pf will use this)  (deraadt, 2002-05-11; -1/+3)
* sbcompress() can compact mbuf clusters now; from thorpej@netbsd  (provos, 2001-11-30; -8/+8)
* avoid "thundering herd" problem in accept by waking just one process.  (provos, 2001-11-28; -2/+2)
  based on freebsd. okay art@ markus@
* from enami@netbsd:  (provos, 2001-11-28; -3/+4)
  Give different names for different wait channels
* change socket allocation to pool allocator; from netbsd; okay niklas@  (provos, 2001-11-27; -3/+5)
* change socket connection queues to use TAILQ_  (provos, 2001-11-27; -29/+19)
  from NetBSD: Wed Jan 7 23:47:08 1998 UTC by thorpej
  Make insertion and removal of sockets from the partial and incoming
  connections queues O(C) rather than O(N).
* At sonewconn() time, copy so_siguid & so_sigeuid to the newly created socket.  (deraadt, 2001-09-26; -1/+3)
  Stops a nasty little program supplied by gustavo@core-sdi.com
* It feels a bit pointless to have:  (art, 2001-07-05; -7/+4)
  #define sonewconn(head, connstatus) sonewconn1((head), (connstatus))
  Just wastes preprocessor time.
* KNF  (deraadt, 2001-06-22; -9/+9)
* Style.  (angelos, 2001-05-26; -3/+3)
* prevent overflow in sbreserve; from wollman@freebsd via netbsd  (provos, 2001-05-02; -2/+3)
* support kernel event queues, from FreeBSD by Jonathan Lemon,  (provos, 2000-11-16; -1/+3)
  okay art@, millert@
* more fix to ancillary data alignment. we need padding after  (itojun, 2000-02-29; -6/+5)
  last cmsg_data item (see the figure on RFC2292 page 18).
* fix alignment problem in ancillary data (alpha).  (itojun, 2000-02-18; -4/+4)
  only ipv6 tools (which touch ancillary data) are affected.
  From: Göran Bengtson <goeran@cdg.chalmers.se>
* Fix misleading comment.  (angelos, 2000-02-04; -3/+3)
* bring in KAME IPv6 code, dated 19991208.  (itojun, 1999-12-08; -1/+38)
  replaces NRL IPv6 layer. reuses NRL pcb layer. no IPsec-on-v6 support.
  see sys/netinet6/{TODO,IMPLEMENTATION} for more details. GENERIC
  configuration should work fine as before. GENERIC.v6 works fine as
  well, but you'll need KAME userland tools to play with IPv6 (will be
  brought in soon).
* fixed patch for accept/select race; mycroft@netbsd.org  (millert, 1999-02-19; -2/+2)
* undo select/accept patch, which causes full listen queues apparently  (deraadt, 1999-02-18; -2/+2)
* Fixes select(2)/accept(2) race condition which permits DoS; mycroft@netbsd.org  (millert, 1999-01-21; -2/+2)
* add separate so_euid & so_ruid to struct socket, so that identd is still fast.. Sigh. I will change this again later  (deraadt, 1998-02-14; -2/+3)