path: root/include
Age | Commit message | Author | Files | Lines
2010-05-10 | Bluetooth: Create per controller workqueue | Marcel Holtmann | 1 | -0/+2
Instead of having a global workqueue for all controllers, it makes more sense to have a workqueue per controller. Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2010-05-10 | Bluetooth: Fix race condition on l2cap_ertm_send() | Gustavo F. Padovan | 1 | -0/+1
l2cap_ertm_send() can be called from both user context and bottom half context. The socket locks for those contexts are different: the user context uses a mutex (which can sleep) and the bottom half uses a spinlock_bh. That creates a race condition when we get interruptions in both contexts at the same time. The better way to solve this is to add a new spinlock to protect l2cap_ertm_send() and the variables it accesses. The other solution was to defer l2cap_ertm_send() with a workqueue, but the sending process already has one deferral at the HCI layer, so it is not a good idea to add another one. The patch refactors the code to create l2cap_retransmit_frames(), which encapsulates the locking of l2cap_ertm_send() for some callers. It also renames l2cap_retransmit_frame() to l2cap_retransmit_one_frame() to avoid confusion. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
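A minimal sketch of the locking scheme described above; the spinlock field name (send_lock) and the helper are illustrative assumptions, not the exact l2cap_pinfo layout, and it only shows serializing the ERTM send path between user and bottom-half context:

    /* Hedged sketch -- field and function names are illustrative. */
    static void l2cap_retransmit_frames_sketch(struct sock *sk)
    {
            struct l2cap_pinfo *pi = l2cap_pi(sk);

            spin_lock_bh(&pi->send_lock);   /* usable from both contexts */
            /* ... requeue unacked I-frames ... */
            l2cap_ertm_send(sk);
            spin_unlock_bh(&pi->send_lock);
    }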
2010-05-10 | Bluetooth: Implement Local Busy Condition handling | Gustavo F. Padovan | 1 | -0/+6
Supports Local Busy condition handling through a waitqueue that wakes up every 200ms and tries to push the packets to the upper layer. If it can push the whole queue, it leaves the Local Busy state. The patch modifies the behaviour of l2cap_ertm_reassembly_sdu() to support retrying the push operation. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2010-05-10 | Bluetooth: Make hci_send_acl() void | Gustavo F. Padovan | 1 | -1/+1
hci_send_acl() can't fail, so we can make it void. This patch changes that and all the functions that use hci_send_acl(). That change exposed a bug on sending connectionless data: we were not reporting the length sent back to user space. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2010-05-10 | Bluetooth: Enable option to configure Max Transmission value via sockopt | Gustavo F. Padovan | 1 | -0/+2
With the sockopt extension we can set a per-channel MaxTx value. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2010-05-10 | Bluetooth: Change acknowledgement to use the value of txWindow | Gustavo F. Padovan | 1 | -2/+1
Now that we can set the txWindow we need to change the acknowledgement procedure to ack after each (pi->txWindow/6 + 1) I-frames received. The plus 1 is to avoid a zero value. It also renames pi->num_to_ack to a better name: pi->num_acked. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
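A small sketch of the threshold check described above; the field names (tx_win, num_acked) follow the commit text, but the helper itself is hypothetical:

    /* Hedged sketch: ack after every (txWindow / 6 + 1) received I-frames;
     * the "+ 1" keeps the threshold non-zero for small windows. */
    static bool l2cap_should_ack_sketch(struct l2cap_pinfo *pi)
    {
            if (++pi->num_acked >= pi->tx_win / 6 + 1) {
                    pi->num_acked = 0;
                    return true;    /* caller sends an RR S-frame now */
            }
            return false;
    }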
2010-05-10 | Bluetooth: Add sockopt configuration for txWindow on L2CAP | Gustavo F. Padovan | 1 | -0/+2
Now we can set/get Transmission Window size via sockopt. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2010-05-10 | Bluetooth: Fix configuration of the MPS value | Gustavo F. Padovan | 1 | -1/+2
We were accepting MPS values bigger than we can actually handle. This was leading ERTM to drop packets because of wrong FCS checks. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2010-05-10 | Bluetooth: Add timer to Acknowledge I-frames | Gustavo F. Padovan | 1 | -0/+4
We ack I-frames after each txWindow/5 I-frames received, but if the sender stops sending I-frames at a point that is not a txWindow multiple we can leave some frames unacked. So I added a timer to ack I-frames in this case. The timer expires in 200ms. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
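A minimal sketch of arming such an ack timer on I-frame reception; the timer and helper names are illustrative, not the exact fields added by the patch:

    #include <linux/timer.h>
    #include <linux/jiffies.h>

    #define L2CAP_ACK_TIMEOUT_SKETCH  msecs_to_jiffies(200)   /* 200 ms, per commit text */

    /* Hedged sketch: (re)arm the ack timer each time an I-frame arrives unacked. */
    static void l2cap_arm_ack_timer_sketch(struct timer_list *ack_timer)
    {
            mod_timer(ack_timer, jiffies + L2CAP_ACK_TIMEOUT_SKETCH);
    }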
2010-05-10 | Bluetooth: Implement 'Send IorRRorRNR' event | Gustavo F. Padovan | 1 | -9/+11
After receiving an RR with the P bit set, ERTM shall use this function to choose what type of frame to reply with the F bit set to 1. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2010-05-10 | Bluetooth: Make hci_send_sco() void | Gustavo F. Padovan | 1 | -1/+1
It also removes an unneeded check for the MTU; the check is already done earlier in sco_send_frame(). Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2010-05-08 | ipv4: remove ip_rt_secret timer (v4) | Neil Horman | 1 | -1/+0
A while back there was a discussion regarding the rt_secret_interval timer. Given that we've had the ability to do emergency route cache rebuilds for a while now, based on a statistical analysis of the various hash chain lengths in the cache, the use of the flush timer is somewhat redundant. This patch removes the rt_secret_interval sysctl, allowing us to rely solely on the statistical analysis mechanism to determine the need for route cache flushes. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-05-06 | net: Increase NET_SKB_PAD to 64 bytes | Eric Dumazet | 1 | -1/+4
eth_type_trans() & get_rps_cpus() currently need two 64-byte cache lines in the packet to compute rxhash. Increasing NET_SKB_PAD from 32 to 64 reduces the need to one cache line only, and makes RPS faster: the data that must be read is NET_IP_ALIGN(2) + ethernet_header(14) + IP_header(20/40) + ports(8), at most 64 bytes. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
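For context, a hedged sketch of how receive allocation interacts with NET_SKB_PAD (roughly what dev_alloc_skb() does); the helper name is illustrative, the point being that reserving the now 64-byte pad keeps the headers RPS hashes within a single 64-byte cache line:

    #include <linux/skbuff.h>

    /* Hedged sketch of a driver-side RX allocation using the larger pad. */
    static struct sk_buff *rx_alloc_sketch(unsigned int length)
    {
            struct sk_buff *skb = alloc_skb(length + NET_SKB_PAD, GFP_ATOMIC);

            if (likely(skb))
                    skb_reserve(skb, NET_SKB_PAD);  /* headroom before MAC/IP/ports */
            return skb;
    }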
2010-05-06 | netpoll: Use 'bool' for netpoll_rx() return type. | David S. Miller | 1 | -4/+4
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-05-06 | netpoll: add generic support for bridge and bonding devices | WANG Cong | 4 | -0/+9
This whole patchset is for adding netpoll support to bridge and bonding devices. I already tested it for bridge, bonding, bridge over bonding, and bonding over bridge. It looks fine now. To make bridge and bonding support netpoll, we need to adjust some netpoll generic code. This patch does the following things:
1) introduce two new priv_flags for struct net_device: IFF_IN_NETPOLL, which identifies that we are processing a netpoll; IFF_DISABLE_NETPOLL, which is used to disable netpoll support for a device at run-time;
2) introduce one new method for netdev_ops: ->ndo_netpoll_cleanup() is used to clean up netpoll when a device is removed;
3) introduce netpoll_poll_dev(), which takes a struct net_device * parameter; export netpoll_send_skb() and netpoll_poll_dev(), which will be used later;
4) hide a pointer to struct netpoll in struct netpoll_info, ditto;
5) introduce ->real_dev for struct netpoll;
6) introduce a new status NETDEV_BONDING_DESLAVE, which is used to disable netconsole before releasing a slave, to avoid deadlocks.
Cc: David Miller <davem@davemloft.net> Cc: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: WANG Cong <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-05-06 | Merge branch 'vhost' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | David S. Miller | 1 | -0/+2
2010-05-05 | Merge branch 'for-davem' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6 | David S. Miller | 6 | -129/+159
2010-05-05 | Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6 into for-davem | John W. Linville | 6 | -129/+159
Conflicts: drivers/net/wireless/libertas_tf/cmd.c drivers/net/wireless/libertas_tf/main.c
2010-05-05 | net: __alloc_skb() speedup | Eric Dumazet | 1 | -1/+6
With the following patch I can reach the maximum rate of my pktgen+udpsink simulator:
- 'old' machine: dual quad core E5450 @3.00GHz
- 64 UDP rx flows (only differ by destination port)
- RPS enabled, NIC interrupts serviced on cpu0
- rps dispatched on 7 other cores (~130.000 IPI per second)
- SLAB allocator (faster than SLUB in this workload)
- tg3 NIC
- 1.080.000 pps without a single drop at NIC level.
The idea is to add two prefetchw() calls in __alloc_skb(), one to prefetch the first sk_buff cache line, the second to prefetch the shinfo part. Also use one memset() to initialize all skb_shared_info fields instead of one by one, to reduce the number of instructions, using long word moves. All skb_shared_info fields before 'dataref' are cleared in __alloc_skb(). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
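A condensed, hedged sketch of the two ideas (prefetch and single memset); this is not the full __alloc_skb() body, just the relevant fragment, assuming data points at the freshly allocated buffer whose shinfo area starts at SKB_DATA_ALIGN(size):

    #include <linux/prefetch.h>
    #include <linux/skbuff.h>
    #include <linux/string.h>

    /* Hedged fragment illustrating the commit's two optimizations. */
    static void alloc_skb_fastpath_sketch(struct sk_buff *skb, u8 *data,
                                          unsigned int size)
    {
            struct skb_shared_info *shinfo;

            prefetchw(skb);                           /* first sk_buff cache line */
            prefetchw(data + SKB_DATA_ALIGN(size));   /* shinfo lives at end of data */

            shinfo = (struct skb_shared_info *)(data + SKB_DATA_ALIGN(size));
            /* clear every field before 'dataref' with one long-word memset */
            memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
            atomic_set(&shinfo->dataref, 1);
    }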
2010-05-03 | Merge branch 'net-next' of git://git.kernel.org/pub/scm/linux/kernel/git/vxy/lksctp-dev | David S. Miller | 3 | -34/+36
Add missing linux/vmalloc.h include to net/sctp/probe.c Signed-off-by: David S. Miller <davem@davemloft.net>
2010-05-03 | net: rcu fixes | Eric Dumazet | 1 | -0/+29
Add hlist_for_each_entry_rcu_bh() and hlist_for_each_entry_continue_rcu_bh() macros, and use them in ipv6_get_ifaddr(), if6_get_first() and if6_get_next() to fix lockdep warnings. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-05-03 | tun: add ioctl to modify vnet header size | Michael S. Tsirkin | 1 | -0/+2
virtio added a mergeable buffers mode where 2 bytes of extra info are put after the vnet header but before the actual data (tun does not need this data). In hindsight, it would have been better to add the new info *before* the packet: as it is, users need a lot of tricky code to skip the extra 2 bytes in the middle of the iovec, and in fact applications seem to get it wrong and only work with a specific iovec layout. The fact we might need to split the iovec also means we might in theory overflow the iovec max size. This patch adds a simpler way for applications to handle this, and future-proofs the interface against further extensions, by making the size of the virtio net header configurable from userspace. As a result, the tun driver will simply skip the extra 2 bytes on both input and output. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: David S. Miller <davem@davemloft.net>
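A hedged userspace sketch of how an application might use the new ioctl to match the mergeable-buffers layout; error handling is trimmed and the surrounding tap setup is assumed to exist already:

    #include <sys/ioctl.h>
    #include <linux/if_tun.h>
    #include <linux/virtio_net.h>

    /* Tell the tap device how large our vnet header is, so it skips the
     * extra num_buffers field transparently on both read and write. */
    static int set_vnet_hdr_size(int tap_fd)
    {
            int len = sizeof(struct virtio_net_hdr_mrg_rxbuf);  /* hdr + 2 bytes */

            return ioctl(tap_fd, TUNSETVNETHDRSZ, &len);
    }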
2010-05-02 | net: Use explicit "unsigned int" instead of plain "unsigned" in netdevice.h | David S. Miller | 1 | -5/+5
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-05-02 | net: fix softnet_stat | Changli Gao | 1 | -10/+7
The per-cpu variable softnet_data.total was shared between IRQ and SoftIRQ context without any protection, and enqueue_to_backlog should update the netdev_rx_stat of the target CPU. This patch renames softnet_data.total to softnet_data.processed: the number of packets processed in upper levels (IP stacks). softnet_stat data is moved into softnet_data. Signed-off-by: Changli Gao <xiaosuo@gmail.com> ---- include/linux/netdevice.h | 17 +++++++---------- net/core/dev.c | 26 ++++++++++++-------------- net/sched/sch_generic.c | 2 +- 3 files changed, 20 insertions(+), 25 deletions(-) Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-05-02 | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | David S. Miller | 4 | -1/+4
2010-05-02 | net: fix compile error due to double return type in SOCK_DEBUG | Jan Engelhardt | 1 | -1/+1
Fix this one: include/net/sock.h: error: two or more data types in declaration specifiers Signed-off-by: Jan Engelhardt <jengelh@medozas.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-05-02 | net: Inline skb_pull() in eth_type_trans(). | David S. Miller | 1 | -0/+5
In commit 6be8ac2f ("[NET]: uninline skb_pull, de-bloats a lot") we uninlined skb_pull. But in some critical paths it makes sense to inline this thing and it helps performance significantly. Create an skb_pull_inline() so that we can do this in a way that also serves as annotation. Based upon a patch by Eric Dumazet. Signed-off-by: David S. Miller <davem@davemloft.net>
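A hedged sketch of what such an inline pull helper looks like, mirroring the out-of-line skb_pull() semantics (return NULL if the pull would run past the data, otherwise advance skb->data):

    #include <linux/skbuff.h>

    /* Hedged sketch of an inline pull helper as described above. */
    static inline unsigned char *skb_pull_inline_sketch(struct sk_buff *skb,
                                                        unsigned int len)
    {
            return unlikely(len > skb->len) ? NULL : __skb_pull(skb, len);
    }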
2010-05-01 | net: sock_def_readable() and friends RCU conversion | Eric Dumazet | 3 | -33/+39
The sk_callback_lock rwlock actually protects the sk->sk_sleep pointer, so we need two atomic operations (and associated dirtying) per incoming packet. RCU conversion is pretty much needed:
1) Add a new structure, called "struct socket_wq", to hold all fields that will need rcu_read_lock() protection (currently: a wait_queue_head_t and a struct fasync_struct pointer). [Future patch will add a list anchor for wakeup coalescing]
2) Attach one such structure to each "struct socket" created in sock_alloc_inode().
3) Respect the RCU grace period when freeing a "struct socket_wq".
4) Change the sk_sleep pointer in "struct sock" to sk_wq, a pointer to "struct socket_wq".
5) Change the sk_sleep() function to use the new sk->sk_wq instead of sk->sk_sleep.
6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used inside an rcu_read_lock() section.
7) Change all sk_has_sleeper() callers to:
   - Use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock)
   - Use wq_has_sleeper() to eventually wake up tasks
   - Use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock)
8) sock_wake_async() is modified to use rcu protection as well.
9) Exceptions: macvtap, drivers/net/tun.c and af_unix use an integrated "struct socket_wq" instead of dynamically allocated ones; they don't need rcu freeing.
Some cleanups or followups are probably needed (possible sk_callback_lock conversion to a spinlock, for example...). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
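A hedged sketch of the new calling convention from step 7, as a data-ready callback might look after the conversion; the function name is illustrative and the exact wake_up flavour used by the kernel may differ:

    #include <linux/rcupdate.h>
    #include <net/sock.h>

    /* Hedged sketch of an RCU-protected data-ready callback. */
    static void data_ready_rcu_sketch(struct sock *sk)
    {
            struct socket_wq *wq;

            rcu_read_lock();                /* instead of read_lock(&sk->sk_callback_lock) */
            wq = rcu_dereference(sk->sk_wq);
            if (wq_has_sleeper(wq))
                    wake_up_interruptible(&wq->wait);
            rcu_read_unlock();
    }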
2010-04-30 | sctp: Tag messages that can be Nagle delayed at creation. | Vlad Yasevich | 1 | -5/+3
When we create the sctp_datamsg and fragment the user data, we know exactly if we are sending full segments or not and how they might be bundled. During this time, we can mark messages as Nagle capable or not. This makes the check at transmit time much simpler. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
2010-04-30 | sctp: fast recovery algorithm is per association. | Vlad Yasevich | 1 | -6/+6
SCTP fast recovery algorithm really applies per association and impacts all transports. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
2010-04-30 | sctp: update transport initializations | Vlad Yasevich | 1 | -1/+1
Right now, sctp transports are not fully initialized and when adding any new fields, they have to be explicitly initialized. This is prone to mistakes. So we switch to calling kzalloc(), which makes things much simpler. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
2010-04-30 | sctp: Save some room in the sctp_transport by using bitfields | Vlad Yasevich | 1 | -22/+27
Saves some room in the sctp_transport structure. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
2010-04-30 | sctp: remove 'resent' bit from the chunk | Vlad Yasevich | 1 | -2/+1
The 'resent' bit is used to make sure that we don't update rto estimate based on retransmitted chunks. However, we already have the 'rto_pending' bit that we test when need to update rto, so 'resent' bit is just extra. Additionally, we currently have a bug in that we always set a 'resent' bit and thus rto estimate is only updated by Heartbeats. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
2010-04-30 | sctp: discard ABORT chunk with zero verification tag in COOKIE-WAIT state | Wei Yongjun | 1 | -1/+1
In the current implementation, if an ABORT chunk is received with the T flag set and a zero verification tag in COOKIE-WAIT state, the ABORT chunk will always be accepted. This is because in COOKIE-WAIT state the endpoint does not know the peer's verification tag, and it is zero in the endpoint. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
2010-04-30 | net: speedup sock_recv_ts_and_drops() | Eric Dumazet | 1 | -1/+18
sock_recv_ts_and_drops() is fat and slow (~4% of cpu time on some profiles). We can test all socket flags at once to make the fast path fast again. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
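A hedged sketch of the single-mask test described above; the mask name and the exact set of flags are assumptions, the point being that the common no-timestamp, no-drop-count case does a single flag test and skips the slow path:

    #include <net/sock.h>

    /* Hedged sketch: group the flags that force the slow path into one mask. */
    #define SK_FLAGS_TS_OR_DROPS_SKETCH \
            ((1UL << SOCK_RXQ_OVFL) | (1UL << SOCK_RCVTSTAMP) | \
             (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))

    static inline void recv_ts_and_drops_sketch(struct msghdr *msg,
                                                struct sock *sk,
                                                struct sk_buff *skb)
    {
            if (sk->sk_flags & SK_FLAGS_TS_OR_DROPS_SKETCH)
                    __sock_recv_ts_and_drops(msg, sk, skb);  /* slow path only when needed */
    }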
2010-04-30 | mac80211: remove deprecated noise field from ieee80211_rx_status | John W. Linville | 1 | -7/+1
Also remove associated IEEE80211_HW_NOISE_DBM from ieee80211_hw_flags. Signed-off-by: John W. Linville <linville@tuxdriver.com>
2010-04-28 | net: ip_queue_rcv_skb() helper | Eric Dumazet | 1 | -0/+1
When queueing a skb to a socket, we can immediately release its dst if the target socket does not use IP_CMSG_PKTINFO. tcp_data_queue() can drop the dst too. This is to benefit from a hot cache line and avoid having the receiver, possibly on another cpu, dirty this cache line itself. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
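A hedged sketch of such a helper: drop the dst reference early when the receiving socket never asked for packet-info ancillary data, then queue as usual. Treat it as an illustration of the idea (including where IP_CMSG_PKTINFO is visible from) rather than the exact kernel function:

    #include <net/inet_sock.h>
    #include <net/sock.h>

    /* Hedged sketch: release skb's dst before queueing when nobody will read it. */
    static int queue_rcv_skb_sketch(struct sock *sk, struct sk_buff *skb)
    {
            if (!(inet_sk(sk)->cmsg_flags & IP_CMSG_PKTINFO))
                    skb_dst_drop(skb);      /* dst not needed for cmsg generation */

            return sock_queue_rcv_skb(sk, skb);
    }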
2010-04-28 | net: speedup udp receive path | Eric Dumazet | 1 | -0/+10
Since commit 95766fff ([UDP]: Add memory accounting.), each received packet needs one extra sock_lock()/sock_release() pair. This added latency because of possible backlog handling. Then later, ticket spinlocks added yet another latency source in case of DDOS. This patch introduces lock_sock_bh() and unlock_sock_bh() synchronization primitives, avoiding one atomic operation and backlog processing. skb_free_datagram_locked() uses them instead of a full-blown lock_sock()/release_sock(). The skb is orphaned inside the locked section for proper socket memory reclaim, and finally freed outside of it. The UDP receive path now takes the socket spinlock only once. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
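A hedged sketch of what such BH-context lock helpers amount to: take the underlying socket spinlock directly and skip the owner/backlog machinery of lock_sock(). The actual 2010 helpers may differ in detail:

    #include <net/sock.h>

    /* Hedged sketch: lightweight socket locking for softirq/BH context. */
    static inline void lock_sock_bh_sketch(struct sock *sk)
    {
            spin_lock_bh(&sk->sk_lock.slock);       /* no backlog processing here */
    }

    static inline void unlock_sock_bh_sketch(struct sock *sk)
    {
            spin_unlock_bh(&sk->sk_lock.slock);
    }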
2010-04-28 | sctp: Fix skb_over_panic resulting from multiple invalid parameter errors (CVE-2010-1173) (v4) | Neil Horman | 1 | -0/+1
Ok, version 4 Change Notes: 1) Minor cleanups, from Vlads notes Summary: Hey- Recently, it was reported to me that the kernel could oops in the following way: <5> kernel BUG at net/core/skbuff.c:91! <5> invalid operand: 0000 [#1] <5> Modules linked in: sctp netconsole nls_utf8 autofs4 sunrpc iptable_filter ip_tables cpufreq_powersave parport_pc lp parport vmblock(U) vsock(U) vmci(U) vmxnet(U) vmmemctl(U) vmhgfs(U) acpiphp dm_mirror dm_mod button battery ac md5 ipv6 uhci_hcd ehci_hcd snd_ens1371 snd_rawmidi snd_seq_device snd_pcm_oss snd_mixer_oss snd_pcm snd_timer snd_page_alloc snd_ac97_codec snd soundcore pcnet32 mii floppy ext3 jbd ata_piix libata mptscsih mptsas mptspi mptscsi mptbase sd_mod scsi_mod <5> CPU: 0 <5> EIP: 0060:[<c02bff27>] Not tainted VLI <5> EFLAGS: 00010216 (2.6.9-89.0.25.EL) <5> EIP is at skb_over_panic+0x1f/0x2d <5> eax: 0000002c ebx: c033f461 ecx: c0357d96 edx: c040fd44 <5> esi: c033f461 edi: df653280 ebp: 00000000 esp: c040fd40 <5> ds: 007b es: 007b ss: 0068 <5> Process swapper (pid: 0, threadinfo=c040f000 task=c0370be0) <5> Stack: c0357d96 e0c29478 00000084 00000004 c033f461 df653280 d7883180 e0c2947d <5> 00000000 00000080 df653490 00000004 de4f1ac0 de4f1ac0 00000004 df653490 <5> 00000001 e0c2877a 08000800 de4f1ac0 df653490 00000000 e0c29d2e 00000004 <5> Call Trace: <5> [<e0c29478>] sctp_addto_chunk+0xb0/0x128 [sctp] <5> [<e0c2947d>] sctp_addto_chunk+0xb5/0x128 [sctp] <5> [<e0c2877a>] sctp_init_cause+0x3f/0x47 [sctp] <5> [<e0c29d2e>] sctp_process_unk_param+0xac/0xb8 [sctp] <5> [<e0c29e90>] sctp_verify_init+0xcc/0x134 [sctp] <5> [<e0c20322>] sctp_sf_do_5_1B_init+0x83/0x28e [sctp] <5> [<e0c25333>] sctp_do_sm+0x41/0x77 [sctp] <5> [<c01555a4>] cache_grow+0x140/0x233 <5> [<e0c26ba1>] sctp_endpoint_bh_rcv+0xc5/0x108 [sctp] <5> [<e0c2b863>] sctp_inq_push+0xe/0x10 [sctp] <5> [<e0c34600>] sctp_rcv+0x454/0x509 [sctp] <5> [<e084e017>] ipt_hook+0x17/0x1c [iptable_filter] <5> [<c02d005e>] nf_iterate+0x40/0x81 <5> [<c02e0bb9>] ip_local_deliver_finish+0x0/0x151 <5> [<c02e0c7f>] ip_local_deliver_finish+0xc6/0x151 <5> [<c02d0362>] nf_hook_slow+0x83/0xb5 <5> [<c02e0bb2>] ip_local_deliver+0x1a2/0x1a9 <5> [<c02e0bb9>] ip_local_deliver_finish+0x0/0x151 <5> [<c02e103e>] ip_rcv+0x334/0x3b4 <5> [<c02c66fd>] netif_receive_skb+0x320/0x35b <5> [<e0a0928b>] init_stall_timer+0x67/0x6a [uhci_hcd] <5> [<c02c67a4>] process_backlog+0x6c/0xd9 <5> [<c02c690f>] net_rx_action+0xfe/0x1f8 <5> [<c012a7b1>] __do_softirq+0x35/0x79 <5> [<c0107efb>] handle_IRQ_event+0x0/0x4f <5> [<c01094de>] do_softirq+0x46/0x4d Its an skb_over_panic BUG halt that results from processing an init chunk in which too many of its variable length parameters are in some way malformed. The problem is in sctp_process_unk_param: if (NULL == *errp) *errp = sctp_make_op_error_space(asoc, chunk, ntohs(chunk->chunk_hdr->length)); if (*errp) { sctp_init_cause(*errp, SCTP_ERROR_UNKNOWN_PARAM, WORD_ROUND(ntohs(param.p->length))); sctp_addto_chunk(*errp, WORD_ROUND(ntohs(param.p->length)), param.v); When we allocate an error chunk, we assume that the worst case scenario requires that we have chunk_hdr->length data allocated, which would be correct nominally, given that we call sctp_addto_chunk for the violating parameter. Unfortunately, we also, in sctp_init_cause insert a sctp_errhdr_t structure into the error chunk, so the worst case situation in which all parameters are in violation requires chunk_hdr->length+(sizeof(sctp_errhdr_t)*param_count) bytes of data. 
The result of this error is that a deliberately malformed packet sent to a listening host can cause a remote DOS, described in CVE-2010-1173: http://cve.mitre.org/cgi-bin/cvename.cgi?name=2010-1173 I've tested the below fix and confirmed that it fixes the issue. We move to a strategy whereby we allocate a fixed size error chunk and ignore errors we don't have space to report. Tested by me successfully Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Vlad Yasevich <vladislav.yasevich@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-28 | mac80211: notify driver about IBSS status | Johannes Berg | 1 | -1/+5
Some drivers (e.g. iwlwifi) need to know this and currently try to figure it out based on other things, but making it explicit is definitely better. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
2010-04-28 | caif: Rewritten socket implementation | Sjur Braendeland | 1 | -2/+3
Changes: This is a complete re-write of the socket layer. Making the socket implementation more aligned with the other socket layers and using more of the support functions available in sock.c. Lots of code is copied from af_unix (and some from af_irda). Non-blocking mode should be working as well. Signed-off-by: Sjur Braendeland <sjur.brandeland@stericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-28 | caif: Disconnect without waiting for response | Sjur Braendeland | 1 | -3/+4
Changes:
o Function cfcnfg_disconn_adapt_layer is changed to do asynchronous disconnect, not waiting for any response from the modem. Due to this, the function cfcnfg_linkdestroy_rsp does nothing anymore.
o Because disconnect may take down a connection before a connect response is received, the function cfcnfg_linkup_rsp checks if the client is still waiting for the response; if not, a disconnect request is sent to the modem.
o cfctrl no longer keeps track of pending disconnect requests.
o Added function cfctrl_cancel_req, which is used for deleting a pending connect request if disconnect is done before the connect response is received.
o Removed unused function cfctrl_insert_req2.
o Added better handling of connect reject from the modem.
Signed-off-by: Sjur Braendeland <sjur.brandeland@stericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-28 | caif: Add reference counting to service layer | Sjur Braendeland | 3 | -0/+40
Changes:
o Added functions cfsrvl_get and cfsrvl_put.
o Added release_client support for use by the socket and net device layers.
o Increased the reference count for in-flight packets from cfmuxl.
Signed-off-by: Sjur Braendeland <sjur.brandeland@stericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-28 | caif: Rename functions in cfcnfg and caif_dev | Sjur Braendeland | 2 | -7/+9
Changes:
o Renamed cfcnfg_del_adapt_layer to cfcnfg_disconn_adapt_layer
o Fixed typo cfcfg to cfcnfg
o Renamed linkid to channel_id
o Updated documentation in caif_dev.h
o Minor formatting changes
Signed-off-by: Sjur Braendeland <sjur.brandeland@stericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-28 | sctp: Fix oops when sending queued ASCONF chunks | Vlad Yasevich | 1 | -0/+1
When we finish processing ASCONF_ACK chunk, we try to send the next queued ASCONF. This action runs the sctp state machine recursively and it's not prepared to do so. kernel BUG at kernel/timer.c:790! invalid opcode: 0000 [#1] SMP last sysfs file: /sys/module/ipv6/initstate Modules linked in: sha256_generic sctp libcrc32c ipv6 dm_multipath uinput 8139too i2c_piix4 8139cp mii i2c_core pcspkr virtio_net joydev floppy virtio_blk virtio_pci [last unloaded: scsi_wait_scan] Pid: 0, comm: swapper Not tainted 2.6.34-rc4 #15 /Bochs EIP: 0060:[<c044a2ef>] EFLAGS: 00010286 CPU: 0 EIP is at add_timer+0xd/0x1b EAX: cecbab14 EBX: 000000f0 ECX: c0957b1c EDX: 03595cf4 ESI: cecba800 EDI: cf276f00 EBP: c0957aa0 ESP: c0957aa0 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 Process swapper (pid: 0, ti=c0956000 task=c0988ba0 task.ti=c0956000) Stack: c0957ae0 d1851214 c0ab62e4 c0ab5f26 0500ffff 00000004 00000005 00000004 <0> 00000000 d18694fd 00000004 1666b892 cecba800 cecba800 c0957b14 00000004 <0> c0957b94 d1851b11 ceda8b00 cecba800 cf276f00 00000001 c0957b14 000000d0 Call Trace: [<d1851214>] ? sctp_side_effects+0x607/0xdfc [sctp] [<d1851b11>] ? sctp_do_sm+0x108/0x159 [sctp] [<d1863386>] ? sctp_pname+0x0/0x1d [sctp] [<d1861a56>] ? sctp_primitive_ASCONF+0x36/0x3b [sctp] [<d185657c>] ? sctp_process_asconf_ack+0x2a4/0x2d3 [sctp] [<d184e35c>] ? sctp_sf_do_asconf_ack+0x1dd/0x2b4 [sctp] [<d1851ac1>] ? sctp_do_sm+0xb8/0x159 [sctp] [<d1863334>] ? sctp_cname+0x0/0x52 [sctp] [<d1854377>] ? sctp_assoc_bh_rcv+0xac/0xe1 [sctp] [<d1858f0f>] ? sctp_inq_push+0x2d/0x30 [sctp] [<d186329d>] ? sctp_rcv+0x797/0x82e [sctp] Tested-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Yuansong Qiao <ysqiao@research.ait.ie> Signed-off-by: Shuaijun Zhang <szhang@research.ait.ie> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-28 | sctp: avoid irq lock inversion while call sk->sk_data_ready() | Wei Yongjun | 1 | -0/+1
sk->sk_data_ready() of sctp socket can be called from both BH and non-BH contexts, but the default sk->sk_data_ready(), sock_def_readable(), can not be used in this case. Therefore, we have to make a new function sctp_data_ready() to grab sk->sk_data_ready() with BH disabling. ========================================================= [ INFO: possible irq lock inversion dependency detected ] 2.6.33-rc6 #129 --------------------------------------------------------- sctp_darn/1517 just changed the state of lock: (clock-AF_INET){++.?..}, at: [<c06aab60>] sock_def_readable+0x20/0x80 but this lock took another, SOFTIRQ-unsafe lock in the past: (slock-AF_INET){+.-...} and interrupts could create inverse lock ordering between them. other info that might help us debug this: 1 lock held by sctp_darn/1517: #0: (sk_lock-AF_INET){+.+.+.}, at: [<cdfe363d>] sctp_sendmsg+0x23d/0xc00 [sctp] Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
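A hedged sketch of the kind of replacement callback described above: same wakeup behaviour as the default, but the callback lock is taken with BH disabled so the function is safe from both contexts. The exact wake_up variant and helper names are assumptions:

    #include <net/sock.h>

    /* Hedged sketch of a BH-safe data-ready callback for SCTP. */
    static void sctp_data_ready_sketch(struct sock *sk, int len)
    {
            read_lock_bh(&sk->sk_callback_lock);    /* _bh: callable from softirq too */
            if (sk_has_sleeper(sk))
                    wake_up_interruptible(sk->sk_sleep);
            sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
            read_unlock_bh(&sk->sk_callback_lock);
    }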
2010-04-27 | net: disallow to use net_assign_generic externally | Jiri Pirko | 1 | -7/+2
Now there's no need to use this function directly, because it's handled by register_pernet_device. So to make this simple and easy to understand, make it static so as not to tempt potential users. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-27 | net: sk_add_backlog() take rmem_alloc into account | Eric Dumazet | 1 | -2/+11
The current socket backlog limit is not enough to really stop DDOS attacks, because the user thread spends a lot of time processing a full backlog each round, and might spin madly on the socket lock. We should add the backlog size and receive_queue size (aka rmem_alloc) to pace writers, and let the user run without being slowed down too much. Introduce a sk_rcvqueues_full() helper, to avoid taking the socket lock in stress situations. Under huge stress from a multiqueue/RPS enabled NIC, a single flow udp receiver can now process ~200.000 pps (instead of ~100 pps before the patch) on an 8 core machine. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
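A hedged sketch of what such a helper checks: backlog bytes plus receive-queue bytes against the socket receive buffer limit, so packets can be dropped before the socket lock is touched. Field usage follows the commit description; treat it as illustrative:

    #include <net/sock.h>

    /* Hedged sketch: is the combined backlog + receive queue already full? */
    static inline bool sk_rcvqueues_full_sketch(const struct sock *sk,
                                                const struct sk_buff *skb)
    {
            unsigned int queued = sk->sk_backlog.len +
                                  atomic_read(&sk->sk_rmem_alloc);

            return queued + skb->truesize > (unsigned int)sk->sk_rcvbuf;
    }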
2010-04-27 | net: batch skb dequeueing from softnet input_pkt_queue | Changli Gao | 1 | -2/+4
Batch skb dequeueing from the softnet input_pkt_queue to reduce potential lock contention when RPS is enabled. Note: in the worst case, the number of packets in a softnet_data may be double netdev_max_backlog. Signed-off-by: Changli Gao <xiaosuo@gmail.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-27 | net: Make RFS socket operations not be inet specific. | David S. Miller | 2 | -37/+38
Idea from Eric Dumazet. As for placement inside of struct sock, I tried to choose a place that otherwise has a 32-bit hole on 64-bit systems. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Eric Dumazet <eric.dumazet@gmail.com>