path: root/drivers/net/bonding/bond_alb.c
Age | Commit message | Author | Files | Lines
2013-12-14 | bonding: remove the no effect lock for bond_select_active_slave() | dingtianhong | 1 | -9/+3
The bond slave list is no longer protected by the bond lock, only by RTNL or RCU, so using the bond lock to protect the slave list is meaningless. Remove the release and re-acquire of the bond lock around bond_select_active_slave(). curr_active_slave can only be changed in three places: 1. enslaving a slave, 2. releasing a slave, 3. changing the active slave. All of these hold the bond lock, RTNL and curr_slave_lock together, which is tedious and pointless: the bond lock is clearly not needed, but RTNL or curr_slave_lock is. To access the active slave you therefore have to hold one of the two; if RTNL is already held there is no need to also take curr_slave_lock, otherwise curr_slave_lock is the better choice for performance reasons. There are several places calling bond_select_active_slave() and bond_change_active_slave(); as a next step those will be cleaned up and the locks that have no effect removed. Some documentation is updated together with the function. Suggested-by: Jay Vosburgh <fubar@us.ibm.com> Suggested-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
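A minimal sketch of the access rule described above (the helper name is made up; it assumes the rwlock-based curr_slave_lock of this kernel generation):

    /* A reader that is not on an RTNL-protected path takes curr_slave_lock;
     * paths that already hold RTNL can dereference curr_active_slave directly. */
    static void example_show_active_slave(struct bonding *bond)
    {
            struct slave *active;

            read_lock(&bond->curr_slave_lock);
            active = bond->curr_active_slave;
            if (active)
                    pr_info("%s: active slave is %s\n",
                            bond->dev->name, active->dev->name);
            read_unlock(&bond->curr_slave_lock);
    }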
2013-12-06 | drivers/net/*: Fix FSF address in file headers | Jeff Kirsher | 1 | -2/+1
Several files refer to an old address for the Free Software Foundation in the file header comment. Resolve by replacing the address with the URL <http://www.gnu.org/licenses/> so that we do not have to keep updating the header comments anytime the address changes. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Veaceslav Falico <vfalico@redhat.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: Haiyang Zhang <haiyangz@microsoft.com> CC: "K. Y. Srinivasan" <kys@microsoft.com> CC: Paul Mackerras <paulus@samba.org> CC: Ian Campbell <ian.campbell@citrix.com> CC: Wei Liu <wei.liu2@citrix.com> CC: Rusty Russell <rusty@rustcorp.com.au> CC: "Michael S. Tsirkin" <mst@redhat.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-28 | Revert "Merge branch 'bonding_monitor_locking'" | David S. Miller | 1 | -4/+16
This reverts commit 4d961a101e032b4bf223b279b4b35bc77576f5a8, reversing changes made to a00f6fcc7d0c62a91768d9c4ccba4c7d64fbbce3. Revert bond locking changes, they cause regressions and Veaceslav Falico doesn't like how the commit messages were done at all. Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-27 | bonding: remove bond read lock for bond_alb_monitor() | dingtianhong | 1 | -16/+4
The bond slave list may change while the monitor is running; it is no longer protected by bond->lock, only by rtnl_lock(), so there are three ways to deal with this: 1. put bond_master_upper_dev_link() and bond_upper_dev_unlink() under bond->lock, but it is unsafe to call call_netdevice_notifiers() under a write lock; 2. drop the now-useless bond->lock from the monitor function and rely on the existing rtnl_lock(); 3. use rcu_read_lock() to protect the list, which would convert bond_for_each_slave() to bond_for_each_slave_rcu() and perform better, but this is a slow path, so that gain does not matter. So remove bond->lock and take the rtnl lock around the whole monitor function. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-17 | bonding: use RCU protection for alb xmit path | dingtianhong | 1 | -15/+43
Commit 278b20837511776dc9d5f6ee1c7fabd5479838bb (bonding: initial RCU conversion) converted the roundrobin, active-backup, broadcast and xor xmit paths to RCU protection, which improves performance for those modes; now convert the xmit path for alb mode as well. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Cc: Nikolay Aleksandrov <nikolay@redhat.com> Cc: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-08 | bonding: ensure that TLB mode's active slave has correct mac filter | Veaceslav Falico | 1 | -0/+17
Currently, in TLB mode we change mac addresses only by memcpy-ing them to net_device->dev_addr, without actually setting them via dev_set_mac_address(). This permits us to always receive all the traffic on one mac address. However, when the interface flips, some drivers might enforce mac filtering in their FW/HW based on the current ->dev_addr, and thus we won't be able to receive traffic on that interface if it is selected as active in TLB mode. Fix it by forcefully setting the mac address on every new active slave that we select in TLB mode. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: Yuval Mintz <yuvalmin@broadcom.com> Reported-by: Yuval Mintz <yuvalmin@broadcom.com> Tested-by: Yuval Mintz <yuvalmin@broadcom.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
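A hedged sketch of the idea (the helper name is invented; it assumes the two-argument dev_set_mac_address() of this kernel generation and a caller holding RTNL):

    /* Program the address into the new TLB active slave, so drivers that
     * build HW/FW filters from ->dev_addr keep receiving on it - a bare
     * memcpy() to dev->dev_addr would not reach the hardware. */
    static int example_force_mac_on_new_active(struct bonding *bond,
                                               struct slave *new_active)
    {
            struct sockaddr saddr;

            memcpy(saddr.sa_data, bond->dev->dev_addr, bond->dev->addr_len);
            saddr.sa_family = bond->dev->type;

            return dev_set_mac_address(new_active->dev, &saddr);
    }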
2013-09-26 | bonding: add bond_has_slaves() and use it | Veaceslav Falico | 1 | -4/+4
Currently we verify if we have slaves by checking if bond->slave_list is empty. Create a define bond_has_slaves() and use it, a bit more readable and easier to change in the future. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
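The helper is presumably a thin wrapper of this shape (a sketch, not necessarily the exact definition in bonding.h):

    /* true if the bond currently has at least one slave */
    #define bond_has_slaves(bond) (!list_empty(&(bond)->slave_list))

    /* call sites then read naturally, e.g.: */
    static bool example_can_xmit(struct bonding *bond)
    {
            return bond_has_slaves(bond);
    }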
2013-09-26 | bonding: rework rlb_next_rx_slave() to use bond_for_each_slave() | Veaceslav Falico | 1 | -20/+21
Currently, we're using bond_for_each_slave_from(), which is really hard to implement under RCU and/or neighbour list. Remove it and use bond_for_each_slave() instead, taking care of the last used slave. Also, rename next_rx_slave to rx_slave and store the current (last) rx_slave. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-26 | bonding: make bond_for_each_slave() use lower neighbour's private | Veaceslav Falico | 1 | -7/+11
It needs a list_head *iter, so add it wherever needed. Use both non-rcu and rcu variants. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
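After this change a walk over the slaves looks roughly like this (illustrative function; the extra list_head *iter is what the patch threads through):

    static void example_walk_slaves(struct bonding *bond)
    {
            struct list_head *iter;
            struct slave *slave;

            /* non-RCU variant: caller holds RTNL (or the bond lock) */
            bond_for_each_slave(bond, slave, iter)
                    pr_info("slave: %s\n", slave->dev->name);

            /* RCU variant for paths that only hold rcu_read_lock() */
            rcu_read_lock();
            bond_for_each_slave_rcu(bond, slave, iter)
                    pr_info("slave (rcu): %s\n", slave->dev->name);
            rcu_read_unlock();
    }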
2013-09-26 | bonding: remove bond_for_each_slave_continue_reverse() | Veaceslav Falico | 1 | -6/+8
We only use it in rollback scenarios and can easily use the standard bond_for_each_dev() instead. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-26 | net: add adj_list to save only neighbours | Veaceslav Falico | 1 | -1/+1
Currently, we distinguish neighbours (first-level linked devices) from non-neighbours by the neighbour bool in netdev_adjacent. This could be quite time-consuming when we would like to traverse *only* through neighbours - because we'd have to traverse through all devices and check this flag, and in a (quite common) scenario where we have lots of vlans on top of a bridge, which is on top of a bond - the bonding would have to go through all those vlans to get to its upper neighbour linked devices. This situation is really unpleasant, because there are already a lot of cases where a device with slaves needs to go through them in the hot path. To fix this, introduce a new upper/lower device list structure - adj_list, which contains only the neighbours. It always works in pair with the all_adj_list structure (renamed from upper/lower_dev_list), i.e. both of them contain the same links, only that all_adj_list also contains non-neighbour device links. It's really a small change, currently visible only to __netdev_adjacent_dev_insert/remove(), and doesn't change the main linking logic at all. Also, add some comments and fix a name collision in netdev_for_each_upper_dev_rcu(), and rework the naming by the following rules: netdev_(all_)(upper|lower)_* If "all_" is present, then we work with the whole list of upper/lower devices, otherwise - only with direct neighbours. Uninline functions - to get better stack traces. CC: "David S. Miller" <davem@davemloft.net> CC: Eric Dumazet <edumazet@google.com> CC: Jiri Pirko <jiri@resnulli.us> CC: Alexander Duyck <alexander.h.duyck@intel.com> CC: Cong Wang <amwang@redhat.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-15 | bonding: Make alb learning packet interval configurable | Neil Horman | 1 | -1/+1
Running bonding in ALB mode requires that learning packets be sent periodically, so that the switch knows where to send responding traffic. However, depending on switch configuration, there may not be any need to send traffic at the default rate of 3 packets per second, which represents little more than wasted data. Allow the ALB learning packet interval to be made configurable via sysfs. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Veaceslav Falico <vfalico@redhat.com> CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-03 | bonding: use rlb_client_info->vlan_id instead of ->tag | Veaceslav Falico | 1 | -4/+4
Store VID in ->vlan_id (if any), and remove the useless ->tag. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-03 | bonding: remove bond_vlan_used() | Veaceslav Falico | 1 | -4/+2
We're currently using it to verify that we have vlans before getting the tag from the skb we're about to send. It's useless because vlan_get_tag() already verifies whether the skb has a tag (and returns an error if not), and we can receive tagged skbs only if we *already* have vlans. Plus, the current RCUed implementation is kind of useless anyway - the last vlan can be removed the moment we return from the function. So remove the only usage of it and the whole function. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-29 | bonding: remove vlan_list/current_alb_vlan | Veaceslav Falico | 1 | -5/+0
Currently there are no real users of vlan_list/current_alb_vlan, only the helpers which maintain them, so remove them. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-29 | bonding: make alb_send_learning_packets() use upper dev list | Veaceslav Falico | 1 | -18/+11
Currently, if there are vlans on top of a bond, alb_send_learning_packets() will never send LPs from the bond itself (i.e. untagged), which might leave untagged clients without updates. Also, the 'circular vlan' logic (i.e. update only MAX_LP_BURST vlans at a time, and save the last vlan for the next update) is really suboptimal - with lots of vlans it will take a long time to update every vlan. The function is also never called in any hot path and sends only a few small packets - so the optimization by itself is useless. So remove the whole current_alb_vlan/MAX_LP_BURST logic from alb_send_learning_packets(). Instead, we'll first send a packet untagged and then traverse the upper dev list, sending a tagged packet for each vlan found. Also, remove the MAX_LP_BURST define - we no longer need it. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-29 | bonding: split alb_send_learning_packets() | Veaceslav Falico | 1 | -24/+35
Create alb_send_lp_vid(), which will handle the skb/lp creation, vlan tagging and sending, and use it in alb_send_learning_packets(). This way all the logic remains in alb_send_learning_packets(), which becomes a lot cleaner and easier to understand. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-08-01 | bonding: initial RCU conversion | nikolay@redhat.com | 1 | -2/+4
This patch does the initial bonding conversion to RCU. After it the following modes are protected by RCU alone: roundrobin, active-backup, broadcast and xor. Modes ALB/TLB and 3ad still acquire bond->lock for reading, and will be dealt with later. curr_active_slave needs to be dereferenced via rcu in the converted modes because the only thing protecting the slave after this patch is rcu_read_lock, so we need the proper barrier for weakly ordered archs and to make sure we don't have stale pointer. It's not tagged with __rcu yet because there's still work to be done to remove the curr_slave_lock, so sparse will complain when rcu_assign_pointer and rcu_dereference are used, but the alternative to use rcu_dereference_protected would've created much bigger code churn which is more difficult to test and review. That will be converted in time. 1. Active-backup mode 1.1 Perf recording while doing iperf -P 4 - old bonding: iperf spent 0.55% in bonding, system spent 0.29% CPU in bonding - new bonding: iperf spent 0.29% in bonding, system spent 0.15% CPU in bonding 1.2. Bandwidth measurements - old bonding: 16.1 gbps consistently - new bonding: 17.5 gbps consistently 2. Round-robin mode 2.1 Perf recording while doing iperf -P 4 - old bonding: iperf spent 0.51% in bonding, system spent 0.24% CPU in bonding - new bonding: iperf spent 0.16% in bonding, system spent 0.11% CPU in bonding 2.2 Bandwidth measurements - old bonding: 8 gbps (variable due to packet reorderings) - new bonding: 10 gbps (variable due to packet reorderings) Of course the latency has improved in all converted modes, and moreover while doing enslave/release (since it doesn't affect tx anymore). Also I've stress tested all modes doing enslave/release in a loop while transmitting traffic. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
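The converted xmit paths follow this general pattern (a sketch of the active-backup case, not the literal bond_main.c code):

    static netdev_tx_t example_xmit_activebackup(struct sk_buff *skb,
                                                 struct net_device *bond_dev)
    {
            struct bonding *bond = netdev_priv(bond_dev);
            struct slave *slave;

            /* Pairs with rcu_assign_pointer() on the writer side; the
             * read-side critical section keeps the slave from vanishing. */
            rcu_read_lock();
            slave = rcu_dereference(bond->curr_active_slave);
            if (slave)
                    bond_dev_queue_xmit(bond, skb, slave->dev);
            else
                    kfree_skb(skb);
            rcu_read_unlock();

            return NETDEV_TX_OK;
    }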
2013-08-01 | bonding: convert to list API and replace bond's custom list | nikolay@redhat.com | 1 | -31/+20
This patch aims to remove struct bonding's first_slave and struct slave's next and prev pointers, and replace them with the standard Linux list API. The old macros are converted to the list API as well, and some new primitives are now available. The checks for the presence of slaves, which used slave_cnt, have been replaced by the list_empty macro. Also a few small style fixes: changing longest -> shortest line ordering in local variable declarations, leaving an empty line before return, and removing unnecessary brackets. This is the first step towards a gradual RCU conversion. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
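Conceptually the custom list becomes a regular list_head and the old macros become thin wrappers, roughly (a sketch of the idea, not the exact definitions):

    /* struct bonding keeps a slave_list; struct slave gets a list node. */
    #define bond_first_slave(bond) \
            list_first_entry_or_null(&(bond)->slave_list, struct slave, list)

    #define bond_for_each_slave(bond, pos) \
            list_for_each_entry(pos, &(bond)->slave_list, list)

    /* "do we have slaves?" turns into a list_empty() check: */
    static bool example_bond_is_empty(struct bonding *bond)
    {
            return list_empty(&bond->slave_list);
    }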
2013-06-19 | bonding: trivial: make alb use bond_slave_has_mac() | Veaceslav Falico | 1 | -42/+11
Also, clean up bond_alb_handle_active_change(), which had 2 identical ifs. Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-06-17 | bonding: don't call alb_set_slave_mac_addr() while atomic | Veaceslav Falico | 1 | -35/+5
alb_set_slave_mac_addr() sets the mac address in alb mode via dev_set_mac_address(), which might sleep. It's called from alb_handle_addr_collision_on_attach() in atomic context (under read_lock(bond->lock)), thus triggering a bug. Fix this by moving the lock inside alb_handle_addr_collision_on_attach(). v1->v2: As Nikolay Aleksandrov noticed, we can drop the bond->lock completely. Also, use bond_slave_has_mac(), when possible. Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-05-28 | bonding: trivial: remove unused parameter from alb_swap_mac_addr() | Veaceslav Falico | 1 | -4/+4
After b924551 ("bonding: fix enslaving in alb mode when link down") we don't need the bond parameter in alb_swap_mac_addr(), so remove it. Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-04-19 | net: vlan: add protocol argument to packet tagging functions | Patrick McHardy | 1 | -2/+2
Add a protocol argument to the VLAN packet tagging functions. In case of HW tagging, we need that protocol available in the ndo_start_xmit functions, so it is stored in a new field in the skb. The new field fits into a hole (on 64 bit) and doesn't increase the skb's size. Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-01-04 | bonding: remove usage of dev->master | Jiri Pirko | 1 | -3/+3
Benefit from new upper dev list and free bonding from dev->master usage. Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-30 | bonding: delete migrated IP addresses from the rlb hash table | Jiri Bohac | 1 | -34/+157
Bonding in balance-alb mode records information from ARP packets passing through the bond in a hash table (rx_hashtbl). At certain situations (e.g. link change of a slave), rlb_update_rx_clients() will send out ARP packets to update ARP caches of other hosts on the network to achieve RX load balancing. The problem is that once an IP address is recorded in the hash table, it stays there indefinitely. If this IP address is migrated to a different host in the network, bonding still sends out ARP packets that poison other systems' ARP caches with invalid information. This patch solves this by looking at all incoming ARP packets, and checking if the source IP address is one of the source addresses stored in the rx_hashtbl. If it is, but the MAC addresses differ, the corresponding hash table entries are removed. Thus, when an IP address is migrated, the first ARP broadcast by its new owner will purge the offending entries of rx_hashtbl. The hash table is hashed by ip_dst. To be able to do the above check efficiently (not walking the whole hash table), we need a reverse mapping (by ip_src). I added three new members in struct rlb_client_info: rx_hashtbl[x].src_first will point to the start of a list of entries for which hash(ip_src) == x. The list is linked with src_next and src_prev. When an incoming ARP packet arrives at rlb_arp_recv() rlb_purge_src_ip() can quickly walk only the entries on the corresponding lists, i.e. the entries that are likely to contain the offending IP address. To avoid confusion, I renamed these existing fields of struct rlb_client_info: next -> used_next prev -> used_prev rx_hashtbl_head -> rx_hashtbl_used_head (The current linked list is _not_ a list of hash table entries with colliding ip_dst. It's a list of entries that are being used; its purpose is to avoid walking the whole hash table when looking for used entries.) Signed-off-by: Jiri Bohac <jbohac@suse.cz> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
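In terms of data layout, an rx_hashtbl entry after this change carries roughly these fields (a trimmed, illustrative view; the real struct rlb_client_info lives in bond_alb.h):

    struct example_rlb_client_info {
            __be32 ip_src;          /* source IP of the ARP that created the entry */
            __be32 ip_dst;          /* hash key of rx_hashtbl */
            u8 mac_dst[ETH_ALEN];

            /* list of entries currently in use (renamed from next/prev) */
            u32 used_next;
            u32 used_prev;

            /* reverse mapping by hash(ip_src): src_first in entry x heads the
             * list of entries whose ip_src hashes to x, linked via src_next/
             * src_prev, so rlb_purge_src_ip() walks only the relevant bucket */
            u32 src_first;
            u32 src_next;
            u32 src_prev;
    };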
2012-11-30 | bonding: rlb mode of bond should not alter ARP originating via bridge | zheng.li | 1 | -0/+6
Do not modify or load balance ARP packets passing through balance-alb mode (wherein the ARP did not originate locally, and arrived via a bridge). Modifying pass-through ARP replies causes an incorrect MAC address to be placed into the ARP packet, rendering peers unable to communicate with the actual destination from which the ARP reply originated. Load balancing pass-through ARP requests causes an entry to be created for the peer in the rlb table, and bond_alb_monitor will occasionally issue ARP updates to all peers in the table instructing them as to which MAC address they should communicate with; this occurs when some event sets rx_ntt. In the bridged case, however, the MAC address used for the update would be the MAC of the slave, not the actual source MAC of the originating destination. This would render peers unable to communicate with the destinations beyond the bridge. Signed-off-by: Zheng Li <zheng.x.li@oracle.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-13 | bonding: drop_monitor aware | Eric Dumazet | 1 | -3/+3
When packets are dropped in the TX path, it's better to use kfree_skb() instead of dev_kfree_skb() to give proper drop_monitor events. Also move the kfree_skb() call after read_unlock() in bond_alb_xmit() and bond_xmit_activebackup(). Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-12 | bonding: remove packet cloning in recv_probe() | Eric Dumazet | 1 | -15/+5
Cloning all packets in the input path has a significant cost. Use skb_header_pointer()/skb_copy_bits() instead of pskb_may_pull() so that the recv_probe handlers (bond_3ad_lacpdu_recv / bond_arp_rcv / rlb_arp_recv) don't touch the input skb. bond_handle_frame() can then avoid the skb_clone()/dev_kfree_skb(). Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Cc: Jiri Bohac <jbohac@suse.cz> Cc: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Cc: Maciej Żenczykowski <maze@google.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
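The resulting recv_probe pattern looks roughly like this (illustrative function; arp_pkt is the driver's local ARP layout and the real prototypes may differ slightly):

    static int example_rlb_arp_recv(const struct sk_buff *skb,
                                    struct bonding *bond, struct slave *slave)
    {
            struct arp_pkt *arp, _arp;

            if (skb->protocol != cpu_to_be16(ETH_P_ARP))
                    return RX_HANDLER_ANOTHER;

            /* Copy the header out (or point into the linear area) without
             * modifying the shared input skb - no pskb_may_pull(), no clone. */
            arp = skb_header_pointer(skb, 0, sizeof(_arp), &_arp);
            if (!arp)
                    return RX_HANDLER_ANOTHER;

            /* ... inspect arp->ip_src / arp->ip_dst here ... */
            return RX_HANDLER_ANOTHER;
    }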
2012-05-16 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 1 | -5/+7
2012-05-13 | bonding: Fix LACPDU rx_dropped commit. | David S. Miller | 1 | -5/+7
I applied the wrong version of Jiri's bonding fix in commit 13a8e0c8cdb43982372bd6c65fb26839c8fd8ce9 ("bonding: don't increase rx_dropped after processing LACPDUs") I applied v3, which introduces warnings I asked him to fix, instead of v4 which properly takes care of those issues. This inter-diffs such that the warnings are now gone. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-10 | net, drivers/net: Convert compare_ether_addr_64bits to ether_addr_equal_64bits | Joe Perches | 1 | -29/+29
Use the new bool function ether_addr_equal_64bits to add some clarity and reduce the likelihood for misuse of compare_ether_addr_64bits for sorting. Done via cocci script:

    $ cat compare_ether_addr_64bits.cocci
    @@ expression a,b; @@
    - !compare_ether_addr_64bits(a, b)
    + ether_addr_equal_64bits(a, b)

    @@ expression a,b; @@
    - compare_ether_addr_64bits(a, b)
    + !ether_addr_equal_64bits(a, b)

    @@ expression a,b; @@
    - !ether_addr_equal_64bits(a, b) == 0
    + ether_addr_equal_64bits(a, b)

    @@ expression a,b; @@
    - !ether_addr_equal_64bits(a, b) != 0
    + !ether_addr_equal_64bits(a, b)

    @@ expression a,b; @@
    - ether_addr_equal_64bits(a, b) == 0
    + !ether_addr_equal_64bits(a, b)

    @@ expression a,b; @@
    - ether_addr_equal_64bits(a, b) != 0
    + ether_addr_equal_64bits(a, b)

    @@ expression a,b; @@
    - !!ether_addr_equal_64bits(a, b)
    + ether_addr_equal_64bits(a, b)

Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-01-31 | drivers/net: Remove unnecessary k.alloc/v.alloc OOM messages | Joe Perches | 1 | -8/+4
alloc failures use dump_stack so emitting an additional out-of-memory message is an unnecessary duplication. Remove the allocation failure messages. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-01-18 | bonding: fix enslaving in alb mode when link down | Jiri Bohac | 1 | -18/+9
bond_alb_init_slave() is called from bond_enslave() and sets the slave's MAC address. This is done differently for TLB and ALB modes. bond->alb_info.rlb_enabled is used to discriminate between the two modes but this flag may be uninitialized if the slave is being enslaved prior to calling bond_open() -> bond_alb_initialize() on the master. It turns out all the callers of alb_set_slave_mac_addr() pass bond->alb_info.rlb_enabled as the hw parameter. This patch cleans up the unnecessary parameter of alb_set_slave_mac_addr() and makes the function decide based on the bonding mode instead, which fixes the above problem. Reported-by: Narendra K <Narendra_K@Dell.com> Signed-off-by: Jiri Bohac <jbohac@suse.cz> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-01-11 | bond_alb: don't disable softirq under bond_alb_xmit | Maxim Uvarov | 1 | -36/+76
There is no need to disable softirqs under bond_alb_xmit(), which already runs with softirqs disabled. Changes: 1. add non-bh/bh versions of tlb_clear_slave(); 2. provide BH and non-BH hash table lock helpers: _lock_rx_hashtbl_bh/_unlock_rx_hashtbl_bh, _lock_rx_hashtbl/_unlock_rx_hashtbl, _lock_tx_hashtbl_bh/_unlock_tx_hashtbl_bh, _lock_tx_hashtbl/_unlock_tx_hashtbl. Signed-off-by: Maxim Uvarov <maxim.uvarov@oracle.com> Signed-off-by: Cong Wang <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
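The two lock flavours boil down to pairs of helpers of this shape (a sketch; BOND_ALB_INFO() is the existing accessor for bond->alb_info):

    /* used from process/timer context, where softirqs are still enabled */
    static inline void _lock_tx_hashtbl_bh(struct bonding *bond)
    {
            spin_lock_bh(&(BOND_ALB_INFO(bond).tx_hashtbl_lock));
    }

    /* used from bond_alb_xmit(), which already runs with softirqs disabled */
    static inline void _lock_tx_hashtbl(struct bonding *bond)
    {
            spin_lock(&(BOND_ALB_INFO(bond).tx_hashtbl_lock));
    }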
2011-10-30 | bonding: eliminate bond_close race conditions | Jay Vosburgh | 1 | -9/+7
This patch resolves two sets of race conditions. Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> reported the first, as follows: The bond_close() calls cancel_delayed_work() to cancel delayed works. It, however, cannot cancel works that were already queued in workqueue. The bond_open() initializes work->data, and proccess_one_work() refers get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when work->data has been initialized. Thus, a panic occurs. He included a patch that converted the cancel_delayed_work calls in bond_close to flush_delayed_work_sync, which eliminated the above problem. His patch is incorporated, at least in principle, into this patch. In this patch, we use cancel_delayed_work_sync in place of flush_delayed_work_sync, and also convert bond_uninit in addition to bond_close. This conversion to _sync, however, opens new races between bond_close and three periodically executing workqueue functions: bond_mii_monitor, bond_alb_monitor and bond_activebackup_arp_mon. The race occurs because bond_close and bond_uninit are always called with RTNL held, and these workqueue functions may acquire RTNL to perform failover-related activities. If bond_close or bond_uninit is waiting in cancel_delayed_work_sync, deadlock occurs. These deadlocks are resolved by having the workqueue functions acquire RTNL conditionally. If the rtnl_trylock() fails, the functions reschedule and return immediately. For the cases that are attempting to perform link failover, a delay of 1 is used; for the other cases, the normal interval is used (as those activities are not as time critical). Additionally, the bond_mii_monitor function now stores the delay in a variable (mimicing the structure of activebackup_arp_mon). Lastly, all of the above renders the kill_timers sentinel moot, and therefore it has been removed. Tested-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
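The resulting shape of the periodic workqueue functions is roughly the following (an illustrative sketch; the real code only takes RTNL when failover work is actually pending, and the delays are as described above):

    static void example_monitor(struct work_struct *work)
    {
            struct bonding *bond = container_of(work, struct bonding,
                                                alb_work.work);

            /* Never block on RTNL here: bond_close()/bond_uninit() hold RTNL
             * while waiting in cancel_delayed_work_sync(), so blocking would
             * deadlock.  Reschedule with a delay of 1 and try again. */
            if (!rtnl_trylock()) {
                    queue_delayed_work(bond->wq, &bond->alb_work, 1);
                    return;
            }

            /* ... failover-related work that requires RTNL ... */

            rtnl_unlock();
            queue_delayed_work(bond->wq, &bond->alb_work,
                               msecs_to_jiffies(100) /* illustrative interval */);
    }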
2011-10-03 | bonding: properly stop queuing work when requested | Andy Gospodarek | 1 | -1/+2
During a test where a pair of bonding interfaces using ARP monitoring were both brought up and torn down (with an rmmod) repeatedly, a panic in the timer code was noticed. I tracked this down and determined that any of the bonding functions that ran as workqueue handlers and requeued more work might not properly exit when the module was removed. There was a flag protected by the bond lock called kill_timers that is set when the interface goes down or the module is removed, but many of the functions that monitor link status now unlock the bond lock to take rtnl first. There is a chance that another CPU running the rmmod could get the lock and set kill_timers after the first check has passed. With this patch, a function may only requeue work that will make itself run again if kill_timers is not set. I also noticed while doing this work that bond_resend_igmp_join_requests did not have a check for kill_timers, so I added the needed check there as well. Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Reported-by: Liang Zheng <lzheng@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 | bonding: do vlan cleanup | Jiri Pirko | 1 | -2/+2
Now that all devices are cleaned up, bond can be cleaned up as well: - remove bond->vlgrp - remove bond_vlan_rx_register - substitute the necessary occurrences of vlan_group_get_device. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-25 | bonding: prevent deadlock on slave store with alb mode (v3) | Neil Horman | 1 | -4/+0
This soft lockup was recently reported: [root@dell-per715-01 ~]# echo +bond5 > /sys/class/net/bonding_masters [root@dell-per715-01 ~]# echo +eth1 > /sys/class/net/bond5/bonding/slaves bonding: bond5: doing slave updates when interface is down. bonding bond5: master_dev is not up in bond_enslave [root@dell-per715-01 ~]# echo -eth1 > /sys/class/net/bond5/bonding/slaves bonding: bond5: doing slave updates when interface is down. BUG: soft lockup - CPU#12 stuck for 60s! [bash:6444] CPU 12: Modules linked in: bonding autofs4 hidp rfcomm l2cap bluetooth lockd sunrpc be2d Pid: 6444, comm: bash Not tainted 2.6.18-262.el5 #1 RIP: 0010:[<ffffffff80064bf0>] [<ffffffff80064bf0>] .text.lock.spinlock+0x26/00 RSP: 0018:ffff810113167da8 EFLAGS: 00000286 RAX: ffff810113167fd8 RBX: ffff810123a47800 RCX: 0000000000ff1025 RDX: 0000000000000000 RSI: ffff810123a47800 RDI: ffff81021b57f6f8 RBP: ffff81021b57f500 R08: 0000000000000000 R09: 000000000000000c R10: 00000000ffffffff R11: ffff81011d41c000 R12: ffff81021b57f000 R13: 0000000000000000 R14: 0000000000000282 R15: 0000000000000282 FS: 00002b3b41ef3f50(0000) GS:ffff810123b27940(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 00002b3b456dd000 CR3: 000000031fc60000 CR4: 00000000000006e0 Call Trace: [<ffffffff80064af9>] _spin_lock_bh+0x9/0x14 [<ffffffff886937d7>] :bonding:tlb_clear_slave+0x22/0xa1 [<ffffffff8869423c>] :bonding:bond_alb_deinit_slave+0xba/0xf0 [<ffffffff8868dda6>] :bonding:bond_release+0x1b4/0x450 [<ffffffff8006457b>] __down_write_nested+0x12/0x92 [<ffffffff88696ae4>] :bonding:bonding_store_slaves+0x25c/0x2f7 [<ffffffff801106f7>] sysfs_write_file+0xb9/0xe8 [<ffffffff80016b87>] vfs_write+0xce/0x174 [<ffffffff80017450>] sys_write+0x45/0x6e [<ffffffff8005d28d>] tracesys+0xd5/0xe0 It occurs because we are able to change the slave configuarion of a bond while the bond interface is down. The bonding driver initializes some data structures only after its ndo_open routine is called. Among them is the initalization of the alb tx and rx hash locks. So if we add or remove a slave without first opening the bond master device, we run the risk of trying to lock/unlock a spinlock that has garbage for data in it, which results in our above softlock. Note that sometimes this works, because in many cases an unlocked spinlock has the raw_lock parameter initialized to zero (meaning that the kzalloc of the net_device private data is equivalent to calling spin_lock_init), but thats not true in all cases, and we aren't guaranteed that condition, so we need to pass the relevant spinlocks through the spin_lock_init function. Fix it by moving the spin_lock_init calls for the tx and rx hashtable locks to the ndo_init path, so they are ready for use by the bond_store_slaves path. Change notes: v2) Based on conversation with Jay and Nicolas it seems that the ability to enslave devices while the bond master is down should be safe to do. As such this is an outlier bug, and so instead we'll just initalize the errant spinlocks in the init path rather than the open path, solving the problem. We'll also remove the warnings about the bond being down during enslave operations, since it should be safe v3) Fix spelling error Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Reported-by: jtluka@redhat.com CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: nicolas.2p.debian@gmail.com CC: "David S. Miller" <davem@davemloft.net> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
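The fix boils down to initialising the ALB hash-table spinlocks when the net_device private data is set up, rather than in ndo_open (a sketch; field names as assumed from bond_alb.h of this era):

    static int example_bond_init(struct net_device *bond_dev)
    {
            struct bonding *bond = netdev_priv(bond_dev);

            /* Ready before any sysfs enslave/release can touch them, even
             * while the bond master is still down. */
            spin_lock_init(&bond->alb_info.tx_hashtbl_lock);
            spin_lock_init(&bond->alb_info.rx_hashtbl_lock);

            return 0;
    }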
2011-05-09 | net: bonding: factor out rlock(bond->lock) in xmit path | Michał Mirosław | 1 | -9/+2
Pull read_lock(&bond->lock) and BOND_IS_OK() to bond_start_xmit() from mode-dependent xmit functions. netif_running() is always true in hard_start_xmit. Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-04-25 | bonding: move processing of recv handlers into handle_frame() | Jiri Pirko | 1 | -35/+11
Now that bonding uses rx_handler, all traffic going into the bond device goes through bond_handle_frame, so there's no need to go back into the bonding code later via ptype handlers. This patch converts the original ptype handlers into "bonding receive probes". These functions are called from bond_handle_frame and are registered per-mode. Note that vlan packets are also handled, because they are always untagged thanks to vlan_untag(). Note that this also allows arpmon for the eth-bond-bridge-vlan topology. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-04-11 | bonding:fix two typos | Peter Pan(潘卫平) | 1 | -2/+2
replace relpy with reply. replace premanent with permanent. Signed-off-by: Weiping Pan(潘卫平) <panweiping3@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-04-11 | bonding:set save_load to 0 when initializing | Peter Pan(潘卫平) | 1 | -1/+1
It is unnecessary to set save_load to 1 here, as the tx_hashtbl is just kzalloced. Signed-off-by: Weiping Pan(潘卫平) <panweiping3@gmail.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-02-28 | bonding: use the correct size for _simple_hash() | Amerigo Wang | 1 | -1/+1
Clearly it should be the size of ->ip_dst here. Although this is harmless, it still reads oddly. Signed-off-by: WANG Cong <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-01-20 | bonding: Ensure that we unshare skbs prior to calling pskb_may_pull | Neil Horman | 1 | -0/+4
Recently reported oops: kernel BUG at net/core/skbuff.c:813! invalid opcode: 0000 [#1] SMP last sysfs file: /sys/devices/virtual/net/bond0/broadcast CPU 8 Modules linked in: sit tunnel4 cpufreq_ondemand acpi_cpufreq freq_table bonding ipv6 dm_mirror dm_region_hash dm_log cdc_ether usbnet mii serio_raw i2c_i801 i2c_core iTCO_wdt iTCO_vendor_support shpchp ioatdma i7core_edac edac_core bnx2 ixgbe dca mdio sg ext4 mbcache jbd2 sd_mod crc_t10dif mptsas mptscsih mptbase scsi_transport_sas dm_mod [last unloaded: microcode] Modules linked in: sit tunnel4 cpufreq_ondemand acpi_cpufreq freq_table bonding ipv6 dm_mirror dm_region_hash dm_log cdc_ether usbnet mii serio_raw i2c_i801 i2c_core iTCO_wdt iTCO_vendor_support shpchp ioatdma i7core_edac edac_core bnx2 ixgbe dca mdio sg ext4 mbcache jbd2 sd_mod crc_t10dif mptsas mptscsih mptbase scsi_transport_sas dm_mod [last unloaded: microcode] Pid: 0, comm: swapper Not tainted 2.6.32-71.el6.x86_64 #1 BladeCenter HS22 -[7870AC1]- RIP: 0010:[<ffffffff81405b16>] [<ffffffff81405b16>] pskb_expand_head+0x36/0x1e0 RSP: 0018:ffff880028303b70 EFLAGS: 00010202 RAX: 0000000000000002 RBX: ffff880c6458ec80 RCX: 0000000000000020 RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff880c6458ec80 RBP: ffff880028303bc0 R08: ffffffff818a6180 R09: ffff880c6458ed64 R10: ffff880c622b36c0 R11: 0000000000000400 R12: 0000000000000000 R13: 0000000000000180 R14: ffff880c622b3000 R15: 0000000000000000 FS: 0000000000000000(0000) GS:ffff880028300000(0000) knlGS:0000000000000000 CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b CR2: 00000038653452a4 CR3: 0000000001001000 CR4: 00000000000006e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Process swapper (pid: 0, threadinfo ffff8806649c2000, task ffff880c64f16ab0) Stack: ffff880028303bc0 ffffffff8104fff9 000000000000001c 0000000100000000 <0> ffff880000047d80 ffff880c6458ec80 000000000000001c ffff880c6223da00 <0> ffff880c622b3000 0000000000000000 ffff880028303c10 ffffffff81407f7a Call Trace: <IRQ> [<ffffffff8104fff9>] ? __wake_up_common+0x59/0x90 [<ffffffff81407f7a>] __pskb_pull_tail+0x2aa/0x360 [<ffffffffa0244530>] bond_arp_rcv+0x2c0/0x2e0 [bonding] [<ffffffff814a0857>] ? packet_rcv+0x377/0x440 [<ffffffff8140f21b>] netif_receive_skb+0x2db/0x670 [<ffffffff8140f788>] napi_skb_finish+0x58/0x70 [<ffffffff8140fc89>] napi_gro_receive+0x39/0x50 [<ffffffffa01286eb>] ixgbe_clean_rx_irq+0x35b/0x900 [ixgbe] [<ffffffffa01290f6>] ixgbe_clean_rxtx_many+0x136/0x240 [ixgbe] [<ffffffff8140fe53>] net_rx_action+0x103/0x210 [<ffffffff81073bd7>] __do_softirq+0xb7/0x1e0 [<ffffffff810d8740>] ? handle_IRQ_event+0x60/0x170 [<ffffffff810142cc>] call_softirq+0x1c/0x30 [<ffffffff81015f35>] do_softirq+0x65/0xa0 [<ffffffff810739d5>] irq_exit+0x85/0x90 [<ffffffff814cf915>] do_IRQ+0x75/0xf0 [<ffffffff81013ad3>] ret_from_intr+0x0/0x11 <EOI> [<ffffffff8101bc01>] ? mwait_idle+0x71/0xd0 [<ffffffff814cd80a>] ? atomic_notifier_call_chain+0x1a/0x20 [<ffffffff81011e96>] cpu_idle+0xb6/0x110 [<ffffffff814c17c8>] start_secondary+0x1fc/0x23f Resulted from bonding driver registering packet handlers via dev_add_pack and then trying to call pskb_may_pull. If another packet handler (like for AF_PACKET sockets) gets called first, the delivered skb will have a user count > 1, which causes pskb_may_pull to BUG halt when it does its skb_shared check. Fix this by calling skb_share_check prior to the may_pull call sites in the bonding driver to clone the skb when needed. 
Tested successfully by myself and the reporter. Signed-off-by: Neil Horman CC: Andy Gospodarek <andy@greyhouse.net> CC: Jay Vosburgh <fubar@us.ibm.com> CC: "David S. Miller" <davem@davemloft.net> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
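The shape of the fix is along these lines (an illustrative sketch of a ptype handler, not the literal hunk):

    static int example_arp_rcv(struct sk_buff *skb, struct net_device *dev,
                               struct packet_type *pt, struct net_device *orig_dev)
    {
            /* The skb may still be shared with other handlers (e.g. AF_PACKET
             * taps); get a private copy before pulling. */
            skb = skb_share_check(skb, GFP_ATOMIC);
            if (!skb)
                    return NET_RX_DROP;

            if (!pskb_may_pull(skb, arp_hdr_len(dev)))
                    goto out;

            /* ... inspect and handle the ARP payload here ... */
    out:
            dev_kfree_skb(skb);
            return NET_RX_SUCCESS;
    }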
2010-12-16 | bonding: migrate some macros from bond_alb.c to bond_alb.h | Taku Izumi | 1 | -36/+0
This patch simply migrates some macros from bond_alb.c to bond_alb.h. Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-09-14 | bonding: correctly process non-linear skbs | Andy Gospodarek | 1 | -0/+3
It was recently brought to my attention that 802.3ad mode bonds would no longer form when using some network hardware after a driver update. After snooping around I realized that the particular hardware was using page-based skbs and found that skb->data did not contain a valid LACPDU as it was not stored there. That explained the inability to form an 802.3ad-based bond. For balance-alb mode bonds this was also an issue as ARPs would not be properly processed. This patch fixes the issue in my tests and should be applied to 2.6.36 and as far back as anyone cares to add it to stable. Thanks to Alexander Duyck <alexander.h.duyck@intel.com> and Jesse Brandeburg <jesse.brandeburg@intel.com> for the suggestions on this one. Signed-off-by: Andy Gospodarek <andy@greyhouse.net> CC: Alexander Duyck <alexander.h.duyck@intel.com> CC: Jesse Brandeburg <jesse.brandeburg@intel.com> CC: stable@kernel.org Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-07-27 | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | David S. Miller | 1 | -1/+1
Conflicts: drivers/net/bnx2x_main.c Merge bnx2x bug fixes in by hand... :-/ Signed-off-by: David S. Miller <davem@davemloft.net>
2010-07-24 | bonding: set device in RLB ARP packet handler | Greg Edwards | 1 | -1/+1
After: commit 6146b1a4da98377e4abddc91ba5856bef8f23f1e Author: Jay Vosburgh <fubar@us.ibm.com> Date: Tue Nov 4 17:51:15 2008 -0800 bonding: Fix ALB mode to balance traffic on VLANs the dev field in the RLB ARP packet handler was set to NULL to wildcard and accommodate balancing VLANs on top of bonds. This has the side-effect of the packet handler being called against other, non RLB-enabled bonds, and a kernel oops results when it tries to dereference rx_hashtbl in rlb_update_entry_from_arp(), which won't be set for those bonds, e.g. active-backup. With the __netif_receive_skb() changes from: commit 1f3c8804acba841b5573b953f5560d2683d2db0d Author: Andy Gospodarek <andy@greyhouse.net> Date: Mon Dec 14 10:48:58 2009 +0000 bonding: allow arp_ip_targets on separate vlans to use arp validation frames received on VLANs correctly make their way to the bond's handler, so we no longer need to wildcard the device. The oops can be reproduced by: modprobe bonding echo active-backup > /sys/class/net/bond0/bonding/mode echo 100 > /sys/class/net/bond0/bonding/miimon ifconfig bond0 xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx echo +eth0 > /sys/class/net/bond0/bonding/slaves echo +eth1 > /sys/class/net/bond0/bonding/slaves echo +bond1 > /sys/class/net/bonding_masters echo balance-alb > /sys/class/net/bond1/bonding/mode echo 100 > /sys/class/net/bond1/bonding/miimon ifconfig bond1 xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx echo +eth2 > /sys/class/net/bond1/bonding/slaves echo +eth3 > /sys/class/net/bond1/bonding/slaves Pass some traffic on bond0. Boom. [ Tested, behaves as advertised. I do not believe a test of the bonding mode is necessary, as there is no race between the packet handler and the bonding mode changing (the mode can only change when the device is closed). Also updated the log message to include the reproduction and full commit ids. -J ] Signed-off-by: Greg Edwards <greg.edwards@hp.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Acked-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-07-22 | bonding: change test for presence of VLANs | Jay Vosburgh | 1 | -2/+2
After commit ad1afb00393915a51c21b1ae8704562bf036855f ("vlan_dev: VLAN 0 should be treated as "no vlan tag" (802.1p packet)") it is now regular practice for a VLAN "add vid" for VLAN 0 to arrive prior to any VLAN registration or creation of a vlan_group. This patch updates the bonding code that tests for the presence of VLANs configured above bonding. The new logic tests for bond->vlgrp to determine if a registration has occurred, instead of testing that bonding's internal vlan_list is empty. The old code would panic when vlan_list was not empty, but vlgrp was still NULL (because only an "add vid" for VLAN 0 had occurred). Bonding still adds VLAN 0 to its internal list so that 802.1p frames are handled correctly on transmit when non-VLAN accelerated slaves are members of the bond. The test against bond->vlan_list remains in bond_dev_queue_xmit for this reason. Modification to the bond->vlgrp now occurs under lock (in addition to RTNL), because not all inspections of it occur under RTNL. Additionally, because 8021q will never issue a "kill vid" for VLAN 0, there is now logic in bond_uninit to release any remaining entries from vlan_list. Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Cc: Pedro Garcia <pedro.netdev@dondevamos.com> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-07-07 | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | David S. Miller | 1 | -1/+2