path: root/drivers/net/team/team_mode_loadbalance.c
2021-08-17  bpf: Refactor BPF_PROG_RUN into a function  (Andrii Nakryiko; 1 file changed, -1/+1)
Turn BPF_PROG_RUN into a proper always-inlined function. No functional or performance changes are intended, but it makes it much easier to understand how BPF programs actually get executed: it's more obvious what types and callbacks are expected. The extra () around input parameters can also be dropped, as well as the `__` variable prefixes intended to avoid naming collisions, which makes the code simpler to read and write.

This refactoring also highlighted one extra issue: BPF_PROG_RUN is both a macro and an enum value (BPF_PROG_RUN == BPF_PROG_TEST_RUN), so turning BPF_PROG_RUN into a function causes a naming-conflict compilation error. So rename BPF_PROG_RUN to the lower-case bpf_prog_run(), similar to bpf_prog_run_xdp(), bpf_prog_run_pin_on_cpu(), etc. All existing callers of the BPF_PROG_RUN macro are switched to bpf_prog_run() explicitly.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210815070609.987780-2-andrii@kernel.org
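In this file the change boils down to renaming the single call site in the hash helper; a minimal sketch with paraphrased context lines (not a quote of the patch):

        fp = rcu_dereference_bh(lb_priv->fp);
        if (unlikely(!fp))
                return 0;
        lhash = bpf_prog_run(fp, skb);  /* previously: BPF_PROG_RUN(fp, skb) */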
2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152  (Thomas Gleixner; 1 file changed, -5/+1)
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version

extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-04  team: Free BPF filter when unregistering netdev  (Ido Schimmel; 1 file changed, -0/+15)
When team is used in loadbalance mode a BPF filter can be used to provide a hash which will determine the Tx port. When the netdev is later unregistered the filter is not freed which results in memory leaks [1]. Fix by freeing the program and the corresponding filter when unregistering the netdev.

[1]
unreferenced object 0xffff8881dbc47cc8 (size 16):
  comm "teamd", pid 3068, jiffies 4294997779 (age 438.247s)
  hex dump (first 16 bytes):
    a3 00 6b 6b 6b 6b 6b 6b 88 a5 82 e1 81 88 ff ff  ..kkkkkk........
  backtrace:
    [<000000008a3b47e3>] team_nl_cmd_options_set+0x88f/0x11b0
    [<00000000c4f4f27e>] genl_family_rcv_msg+0x78f/0x1080
    [<00000000610ef838>] genl_rcv_msg+0xca/0x170
    [<00000000a281df93>] netlink_rcv_skb+0x132/0x380
    [<000000004d9448a2>] genl_rcv+0x29/0x40
    [<000000000321b2f4>] netlink_unicast+0x4c0/0x690
    [<000000008c25dffb>] netlink_sendmsg+0x929/0xe10
    [<00000000068298c5>] sock_sendmsg+0xc8/0x110
    [<0000000082a61ff0>] ___sys_sendmsg+0x77a/0x8f0
    [<00000000663ae29d>] __sys_sendmsg+0xf7/0x250
    [<0000000027c5f11a>] do_syscall_64+0x14d/0x610
    [<000000006cfbc8d3>] entry_SYSCALL_64_after_hwframe+0x49/0xbe
    [<00000000e23197e2>] 0xffffffffffffffff
unreferenced object 0xffff8881e182a588 (size 2048):
  comm "teamd", pid 3068, jiffies 4294997780 (age 438.247s)
  hex dump (first 32 bytes):
    20 00 00 00 02 00 00 00 30 00 00 00 28 f0 ff ff   .......0...(...
    07 00 00 00 00 00 00 00 28 00 00 00 00 00 00 00  ........(.......
  backtrace:
    [<000000002daf01fb>] lb_bpf_func_set+0x45c/0x6d0
    [<000000008a3b47e3>] team_nl_cmd_options_set+0x88f/0x11b0
    [<00000000c4f4f27e>] genl_family_rcv_msg+0x78f/0x1080
    [<00000000610ef838>] genl_rcv_msg+0xca/0x170
    [<00000000a281df93>] netlink_rcv_skb+0x132/0x380
    [<000000004d9448a2>] genl_rcv+0x29/0x40
    [<000000000321b2f4>] netlink_unicast+0x4c0/0x690
    [<000000008c25dffb>] netlink_sendmsg+0x929/0xe10
    [<00000000068298c5>] sock_sendmsg+0xc8/0x110
    [<0000000082a61ff0>] ___sys_sendmsg+0x77a/0x8f0
    [<00000000663ae29d>] __sys_sendmsg+0xf7/0x250
    [<0000000027c5f11a>] do_syscall_64+0x14d/0x610
    [<000000006cfbc8d3>] entry_SYSCALL_64_after_hwframe+0x49/0xbe
    [<00000000e23197e2>] 0xffffffffffffffff

Fixes: 01d7f30a9f96 ("team: add loadbalance mode")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reported-by: Amit Cohen <amitc@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
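The shape of the fix is a teardown helper that frees both the saved filter blob and the compiled BPF program when the mode is torn down; a hedged sketch (helper and field names such as lb_bpf_func_free() and lb_priv->ex->orig_fprog are paraphrased from the driver, not quoted):

static void lb_bpf_func_free(struct team *team)
{
        struct lb_priv *lb_priv = get_lb_priv(team);
        struct bpf_prog *fp;

        if (!lb_priv->ex->orig_fprog)
                return;

        /* free the user-supplied sock_fprog copy ... */
        __fprog_destroy(lb_priv->ex->orig_fprog);
        /* ... and the bpf_prog it was compiled into */
        fp = rcu_dereference_protected(lb_priv->fp,
                                       lockdep_is_held(&team->lock));
        bpf_prog_destroy(fp);
}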
2017-09-19  team: fall back to hash if table entry is empty  (Jim Hanko; 1 file changed, -1/+7)
If the hash to port mapping table does not have a valid port (i.e. when a port goes down), fall back to the simple hashing mechanism to avoid dropping packets. Signed-off-by: Jim Hanko <hanko@drivescale.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
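A sketch of the resulting selection logic in the hash-to-port-mapping tx selector (macro and helper names follow the driver but are paraphrased):

static struct team_port *lb_htpm_select_tx_port(struct team *team,
                                                struct lb_priv *lb_priv,
                                                struct sk_buff *skb,
                                                unsigned char hash)
{
        struct team_port *port;

        port = rcu_dereference_bh(LB_HTPM_PORT_BY_HASH(lb_priv, hash));
        if (likely(port))
                return port;
        /* no valid port mapped for this hash (e.g. the port went down);
         * fall back to the simple modulo hash selector instead of
         * dropping the packet
         */
        return lb_hash_select_tx_port(team, lb_priv, skb, hash);
}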
2017-06-02  team: add macro MODULE_ALIAS_TEAM_MODE for team mode alias  (Zhang Shengju; 1 file changed, -1/+1)
Add a new macro MODULE_ALIAS_TEAM_MODE to unify and simplify the declaration of team mode alias. Signed-off-by: Zhang Shengju <zhangshengju@cmss.chinamobile.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
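The macro presumably expands to the conventional "team-mode-<kind>" alias string; a sketch of the definition and its use here (assumed, not quoted from the patch):

#define MODULE_ALIAS_TEAM_MODE(kind) MODULE_ALIAS("team-mode-" kind)

/* replaces the open-coded MODULE_ALIAS("team-mode-loadbalance") */
MODULE_ALIAS_TEAM_MODE("loadbalance");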
2016-08-26  team: loadbalance: push lacpdus to exact delivery  (Jiri Pirko; 1 file changed, -0/+15)
When a team device is in a bridge and LACP is utilized, LACPDU packets are pushed to userspace using a raw socket and processed there. However, since 8626c56c8279b, LACPDU skbs are dropped by the bridge rx_handler, so they never reach packet handlers in the rx path. Fix this by explicitly marking LACPDUs for exact delivery in the team rx_handler.

Reported-by: Ido Schimmel <idosch@mellanox.com>
Fixes: 8626c56c8279b ("bridge: fix potential use-after-free when hook returns QUEUE or STOLEN verdict")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
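The fix adds an rx handler hook to the mode; a hedged sketch of its logic (the exact LACPDU test via the slow-protocols ethertype and destination address is an assumption):

static rx_handler_result_t lb_receive(struct team *team,
                                      struct team_port *port,
                                      struct sk_buff *skb)
{
        if (unlikely(skb->protocol == htons(ETH_P_SLOW))) {
                const unsigned char *dest = eth_hdr(skb)->h_dest;

                /* LACPDUs go to exact delivery so the raw socket bound
                 * to the port still sees them
                 */
                if (is_link_local_ether_addr(dest) && dest[5] == 0x02)
                        return RX_HANDLER_EXACT;
        }
        return RX_HANDLER_ANOTHER;
}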
2015-12-03  team: fill-up LAG changeupper info struct and pass it along  (Jiri Pirko; 1 file changed, -0/+1)
Initialize netdev_lag_upper_info structure by TX type according to current team mode and pass it along via netdev_master_upper_dev_link. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-08-02  net: filter: split 'struct sk_filter' into socket and bpf parts  (Alexei Starovoitov; 1 file changed, -7/+7)
clean up names related to socket filtering and bpf in the following way:
- everything that deals with sockets keeps 'sk_*' prefix
- everything that is pure BPF is changed to 'bpf_*' prefix

split 'struct sk_filter' into

        struct sk_filter {
                atomic_t        refcnt;
                struct rcu_head rcu;
                struct bpf_prog *prog;
        };

and

        struct bpf_prog {
                u32                     jited:1,
                                        len:31;
                struct sock_fprog_kern  *orig_prog;
                unsigned int            (*bpf_func)(const struct sk_buff *skb,
                                                    const struct bpf_insn *filter);
                union {
                        struct sock_filter      insns[0];
                        struct bpf_insn         insnsi[0];
                        struct work_struct      work;
                };
        };

so that 'struct bpf_prog' can be used independent of sockets and cleans up 'unattached' bpf use cases

split SK_RUN_FILTER macro into:
    SK_RUN_FILTER to be used with 'struct sk_filter *' and
    BPF_PROG_RUN to be used with 'struct bpf_prog *'

__sk_filter_release(struct sk_filter *) gains
__bpf_prog_release(struct bpf_prog *) helper function

also perform related renames for the functions that work with 'struct bpf_prog *', since they're on the same lines:

sk_filter_size               -> bpf_prog_size
sk_filter_select_runtime     -> bpf_prog_select_runtime
sk_filter_free               -> bpf_prog_free
sk_unattached_filter_create  -> bpf_prog_create
sk_unattached_filter_destroy -> bpf_prog_destroy
sk_store_orig_filter         -> bpf_prog_store_orig_filter
sk_release_orig_filter       -> bpf_release_orig_filter
__sk_migrate_filter          -> bpf_migrate_filter
__sk_prepare_filter          -> bpf_prepare_filter

API for attaching classic BPF to a socket stays the same:
sk_attach_filter(prog, struct sock *)/sk_detach_filter(struct sock *)
and SK_RUN_FILTER(struct sk_filter *, ctx) to execute a program
which is used by sockets, tun, af_packet

API for 'unattached' BPF programs becomes:
bpf_prog_create(struct bpf_prog **)/bpf_prog_destroy(struct bpf_prog *)
and BPF_PROG_RUN(struct bpf_prog *, ctx) to execute a program
which is used by isdn, ppp, team, seccomp, ptp, xt_bpf, cls_bpf, test_bpf

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-31  team: fix releasing uninitialized pointer to BPF prog  (Daniel Borkmann; 1 file changed, -1/+1)
Commit 34c5bd66e5ed introduced the possibility that an uninitialized pointer on the stack (orig_fp) can call into sk_unattached_filter_destroy() when its value is non NULL. Before that commit orig_fp was only destroyed in the same block where it was assigned a valid BPF prog before. Fix it up by initializing it to NULL. Fixes: 34c5bd66e5ed ("net: filter: don't release unattached filter through call_rcu()") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Pablo Neira <pablo@netfilter.org> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
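The fix itself is just a safe initial value so the destroy path only runs when the pointer was actually assigned; roughly:

        struct sk_filter *orig_fp = NULL;       /* was an uninitialized stack variable */

        /* orig_fp is only set when an existing prog is being replaced, so the
         * cleanup must not be reached with a garbage pointer
         */
        if (orig_fp)
                sk_unattached_filter_destroy(orig_fp);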
2014-07-30  net: filter: don't release unattached filter through call_rcu()  (Pablo Neira; 1 file changed, -1/+5)
sk_unattached_filter_destroy() does not always need to release the filter object via rcu. Since this filter is never attached to the socket, the caller should be responsible for releasing the filter in a safe way, which does not necessarily imply rcu.

This is a short summary of the clients of this function:

1) xt_bpf.c and cls_bpf.c use the bpf matchers from rules; these rules are removed from the packet path before the filter is released. Thus, the framework makes sure the filter is safely removed.

2) In the ppp driver, the ppp_lock ensures serialization between the xmit and filter attachment/detachment paths. This doesn't use rcu, so deferred release via rcu makes no sense.

3) In the isdn/ppp driver, it is called from isdn_ppp_release() and isdn_ppp_ioctl(). This driver uses mutexes and spinlocks, no rcu. Thus, deferred rcu makes no sense to me either; the deferred releases may just be masking the effects of a wrong locking strategy, which should be fixed in the driver itself.

4) In the team driver, this is the only place where rcu synchronization with an unattached filter is used. Therefore, this patch introduces synchronize_rcu(), called from the genetlink path, to make sure the filter doesn't go away while packets are still walking over it. I think we can revisit this once struct bpf_prog (that only wraps specific bpf code bits) is in place, then add some specific struct rcu_head in the scope of the team driver if Jiri thinks this is needed.

Deferred rcu release for unattached filters was originally introduced in 302d663 ("filter: Allow to create sk-unattached filters").

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
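For the team driver this means the genetlink update path now waits out in-flight tx users before destroying the old program; a sketch of the pattern (variable names paraphrased from lb_bpf_func_set()):

        rcu_assign_pointer(lb_priv->fp, fp);    /* publish the new filter */
        if (orig_fp) {
                /* wait for tx-path readers still running the old filter */
                synchronize_rcu();
                sk_unattached_filter_destroy(orig_fp);
        }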
2014-05-24  team: lb: use sizeof(*fprog) in __fprog_create  (Daniel Borkmann; 1 file changed, -1/+1)
sock_fprog and sock_fprog_kern are of equal size; however, it's cleaner to just use sizeof(*fprog) so that the correct type is always used. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
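The change keeps the allocation tied to the variable's actual type; roughly:

        struct sock_fprog_kern *fprog;

        /* sock_fprog and sock_fprog_kern happen to have the same size, but
         * sizeof(*fprog) stays correct even if the types ever diverge
         */
        fprog = kmalloc(sizeof(*fprog), GFP_KERNEL);
        if (!fprog)
                return -ENOMEM;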
2014-05-23  net: filter: let unattached filters use sock_fprog_kern  (Daniel Borkmann; 1 file changed, -5/+5)
The sk_unattached_filter_create() API is used by BPF filters that are not directly attached or related to sockets, and are used in team, ptp, xt_bpf, cls_bpf, etc. As such, all users do their own internal management of obtaining filter blocks, and thus already have them in kernel memory and set up before calling into sk_unattached_filter_create(). As a result, due to the __user annotation in sock_fprog, sparse triggers false positives (incorrect type in assignment [different address space]) when filters are set up before passing them to sk_unattached_filter_create(). Therefore, let the sk_unattached_filter_create() API use sock_fprog_kern to overcome this issue.

Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
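sock_fprog_kern mirrors sock_fprog but carries a plain kernel pointer, so in-kernel users like this driver can hand over a pre-built program without tripping sparse; roughly (field layout assumed):

        struct sock_fprog_kern {
                u16                     len;            /* number of classic BPF insns */
                struct sock_filter      *filter;        /* kernel pointer, no __user */
        };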
2014-03-14  net: Replace u64_stats_fetch_begin_bh to u64_stats_fetch_begin_irq  (Eric W. Biederman; 1 file changed, -2/+2)
Replace the bh safe variant with the hard irq safe variant. We need a hard irq safe variant to deal with netpoll transmitting packets from hard irq context, and we need it in most if not all of the places using the bh safe variant. Except on 32bit uni-processor the code is exactly the same so don't bother with a bh variant, just have a hard irq safe variant that everyone can use. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
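Readers of the per-cpu loadbalance stats then use the irq-safe fetch pair; a sketch of the retry loop (the stats struct and field names are assumptions):

        unsigned int start;
        u64 tx_bytes;

        do {
                start = u64_stats_fetch_begin_irq(&pcpu_stats->syncp);
                tx_bytes = pcpu_stats->stats.tx_bytes;  /* field name assumed */
        } while (u64_stats_fetch_retry_irq(&pcpu_stats->syncp, start));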
2013-11-06  net: Explicitly initialize u64_stats_sync structures for lockdep  (John Stultz; 1 file changed, -1/+8)
In order to enable lockdep on seqcount/seqlock structures, we must explicitly initialize any locks. The u64_stats_sync structure uses a seqcount, and thus we need to introduce a u64_stats_init() function and use it to initialize the structure.

This unfortunately adds a lot of fairly trivial initialization code to a number of drivers. But the benefit of ensuring correctness makes this worthwhile.

Because these changes are required for lockdep to be enabled, and the changes are quite trivial, I've not yet split this patch out into 30-some separate patches, as I figured it would be better to get the various maintainers' thoughts on how to best merge this change along with the seqcount lockdep enablement. Feedback would be appreciated!

Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: James Morris <jmorris@namei.org>
Cc: Jesse Gross <jesse@nicira.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Mirko Lindner <mlindner@marvell.com>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Roger Luethi <rl@hellgate.ch>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Simon Horman <horms@verge.net.au>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Wensong Zhang <wensong@linux-vs.org>
Cc: netdev@vger.kernel.org
Link: http://lkml.kernel.org/r/1381186321-4906-2-git-send-email-john.stultz@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
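For this driver that means initializing the syncp of every per-cpu stats slot when the mode is set up; a hedged sketch (struct and field names assumed to follow the driver):

        int i;

        lb_priv->pcpu_stats = alloc_percpu(struct lb_pcpu_stats);
        if (!lb_priv->pcpu_stats)
                return -ENOMEM;

        for_each_possible_cpu(i) {
                struct lb_pcpu_stats *team_lb_stats;

                team_lb_stats = per_cpu_ptr(lb_priv->pcpu_stats, i);
                u64_stats_init(&team_lb_stats->syncp);
        }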
2013-06-12  team: remove synchronize_rcu() called during port disable  (Jiri Pirko; 1 file changed, -2/+1)
Check the unlikely case of team->en_port_count == 0 before modulo operation. Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
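The guard simply avoids a modulo by zero when every port is disabled; a sketch (the helper name is an assumption):

        static inline int team_num_to_port_index(struct team *team, int num)
        {
                int en_port_count = READ_ONCE(team->en_port_count);
                                        /* ACCESS_ONCE() in kernels of that era */
                if (unlikely(!en_port_count))
                        return 0;
                return num % en_port_count;
        }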
2012-07-17  team: add netpoll support  (Jiri Pirko; 1 file changed, -2/+1)
It's done in a very similar way to how it is done in bonding and the bridge. Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-26  team: do not allow to map disabled ports  (Jiri Pirko; 1 file changed, -1/+2)
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-20  team: do RCU update path fixups  (Jiri Pirko; 1 file changed, -4/+10)
Use rcu_access_pointer() and rcu_dereference_protected() to access RCU pointers from the updater side. Use RCU_INIT_POINTER for NULL assignment of RCU pointers. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
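The updater-side pattern these fixups converge on, sketched for the lb_priv->fp pointer:

        struct sk_filter *fp;
        bool orig_fprog_set;

        /* the updater holds team->lock, so no rcu_read_lock() is needed */
        fp = rcu_dereference_protected(lb_priv->fp,
                                       lockdep_is_held(&team->lock));

        /* when only the pointer value matters, not the pointee, the cheaper
         * rcu_access_pointer() form is enough
         */
        orig_fprog_set = rcu_access_pointer(lb_priv->fp) != NULL;

        /* publishing NULL does not need the rcu_assign_pointer() barrier */
        RCU_INIT_POINTER(lb_priv->fp, NULL);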
2012-06-20  team: Revert previous two changes.  (David S. Miller; 1 file changed, -6/+4)
I didn't notice that these were superseded by a more up-to-date version of the changes. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-20  team: use RCU_INIT_POINTER for NULL assignment of RCU pointer  (Jiri Pirko; 1 file changed, -1/+1)
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-20  team: use rcu_access_pointer to access RCU pointer by writer  (Jiri Pirko; 1 file changed, -3/+5)
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-19  team: use rcu_dereference_bh() in tx path  (Jiri Pirko; 1 file changed, -3/+3)
Should be used instead of rcu_dereference, since rcu_read_lock_bh is held. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-19  team: lb: introduce infrastructure for userspace driven tx loadbalancing  (Jiri Pirko; 1 file changed, -17/+500)
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-06-19  team: lb: push hash counting into separate function  (Jiri Pirko; 1 file changed, -7/+16)
Also squash the hash into one byte. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
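After the split, the helper both runs the user filter and folds its 32-bit result down to a byte so a compact hash-indexed table can be used later; a paraphrased sketch (2012-era sk_filter API):

static unsigned char lb_get_skb_hash(struct lb_priv *lb_priv,
                                     struct sk_buff *skb)
{
        struct sk_filter *fp;
        uint32_t lhash;
        unsigned char *c;

        fp = rcu_dereference(lb_priv->fp);
        if (unlikely(!fp))
                return 0;
        lhash = SK_RUN_FILTER(fp, skb);         /* user-supplied BPF hash */
        c = (unsigned char *) &lhash;
        return c[0] ^ c[1] ^ c[2] ^ c[3];       /* squash into one byte */
}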
2012-06-19  team: make team_mode struct const  (Jiri Pirko; 1 file changed, -1/+1)
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-04-21  team: allow to enable/disable ports  (Jiri Pirko; 1 file changed, -1/+1)
This patch changes the content of the hashlist (used to get a port struct by computed index (0...en_port_count-1)). Now the hash list contains only enabled ports, so userspace is able to say which ports can be used for tx/rx. This becomes handy when userspace needs to disable ports which do not belong to the active aggregator. By default, a newly added port is enabled. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-04-21  team: lb: let userspace care about port macs  (Jiri Pirko; 1 file changed, -12/+0)
Better to leave this for userspace Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-04-11  team: add missed "statics"  (Jiri Pirko; 1 file changed, -2/+2)
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-04-11  team: add support for per-port options  (Jiri Pirko; 1 file changed, -15/+13)
This patch allows creating per-port options. That becomes handy for all sorts of things, for example userspace-driven link state, an 802.3ad implementation, and so on. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2012-04-04  team: add loadbalance mode  (Jiri Pirko; 1 file changed, -0/+188)
This patch introduces a new team mode. Its TX port is selected by a user-set BPF hash function. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
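The core of the new mode is a transmit hook that runs the user-supplied filter over the skb and uses the result to pick a port; a simplified sketch of that original path (2012-era API; the private-data accessor lb_priv() is paraphrased):

static bool lb_transmit(struct team *team, struct sk_buff *skb)
{
        struct sk_filter *fp;
        struct team_port *port;
        unsigned int hash;
        int port_index;

        fp = rcu_dereference(lb_priv(team)->fp);
        if (unlikely(!fp))
                goto drop;
        hash = SK_RUN_FILTER(fp, skb);          /* user-set BPF hash */
        port_index = hash % team->port_count;
        port = team_get_port_by_index_rcu(team, port_index);
        if (unlikely(!port))
                goto drop;
        skb->dev = port->dev;
        if (dev_queue_xmit(skb))
                return false;
        return true;

drop:
        dev_kfree_skb_any(skb);
        return false;
}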