author     Tonghao Zhang <xiangxia.m.yue@gmail.com>   2019-11-01 22:23:45 +0800
committer  David S. Miller <davem@davemloft.net>      2019-11-03 17:18:03 -0800
commit     04b7d136d015f220b1003e6c573834658d507a31 (patch)
tree       cc22e3309f1c05eeb3b125f001ce5db6b8f3ad79 /net/openvswitch/datapath.c
parent     Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next (diff)
download   linux-dev-04b7d136d015f220b1003e6c573834658d507a31.tar.xz
           linux-dev-04b7d136d015f220b1003e6c573834658d507a31.zip
net: openvswitch: add flow-mask cache for performance
The idea for this optimization comes from a patch committed to the
Open vSwitch community in 2014 by Pravin B Shelar. To get high
performance, it is implemented again here; later patches will use it.

Pravin B Shelar says:

| On every packet OVS needs to lookup flow-table with every
| mask until it finds a match. The packet flow-key is first
| masked with mask in the list and then the masked key is
| looked up in flow-table. Therefore number of masks can
| affect packet processing performance.

Link: https://github.com/openvswitch/ovs/commit/5604935e4e1cbc16611d2d97f50b717aa31e8ec5
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Tested-by: Greg Rose <gvrose8192@gmail.com>
Acked-by: William Tu <u9012063@gmail.com>
Signed-off-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
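The quoted description implies that, without a cache, lookup cost grows with
the number of installed masks: each packet may be masked and probed once per
mask until one of the masked keys hits in the flow table. Below is a minimal,
self-contained C sketch of that idea and of a flow-mask cache keyed by the skb
hash; the names (flow_key, mask_cache_entry, flow_table_has, and so on) are
illustrative only and are not the kernel's data structures or functions.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define MASK_CACHE_SIZE 256	/* power of two, indexed by low bits of the skb hash */
#define NUM_FIELDS 4

struct flow_key { uint32_t fields[NUM_FIELDS]; };

struct mask_cache_entry {
	uint32_t skb_hash;	/* hash of the flow that filled this slot */
	int mask_index;		/* mask that matched for that flow */
};

static struct flow_key masks[8];	/* the installed masks */
static int n_masks;
static struct mask_cache_entry cache[MASK_CACHE_SIZE];

/* Stand-in for the real hash-table probe of a masked key. */
static bool flow_table_has(const struct flow_key *masked_key)
{
	(void)masked_key;
	return false;
}

static void apply_mask(struct flow_key *dst, const struct flow_key *key,
		       const struct flow_key *mask)
{
	for (size_t i = 0; i < NUM_FIELDS; i++)
		dst->fields[i] = key->fields[i] & mask->fields[i];
}

/*
 * Probe the cached mask first; on a miss, fall back to trying every
 * mask in order, then remember which one matched so the next packet
 * with the same skb hash can skip the scan.
 */
static int mask_cached_lookup(const struct flow_key *key, uint32_t skb_hash,
			      int *n_mask_hit)
{
	struct mask_cache_entry *ce = &cache[skb_hash & (MASK_CACHE_SIZE - 1)];
	struct flow_key masked;

	*n_mask_hit = 0;

	if (ce->skb_hash == skb_hash) {
		(*n_mask_hit)++;
		apply_mask(&masked, key, &masks[ce->mask_index]);
		if (flow_table_has(&masked))
			return ce->mask_index;		/* fast path: cache hit */
	}

	for (int i = 0; i < n_masks; i++) {		/* slow path: try all masks */
		(*n_mask_hit)++;
		apply_mask(&masked, key, &masks[i]);
		if (flow_table_has(&masked)) {
			ce->skb_hash = skb_hash;	/* refresh the cache slot */
			ce->mask_index = i;
			return i;
		}
	}
	return -1;					/* no mask matched; upcall */
}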
Diffstat (limited to 'net/openvswitch/datapath.c')
-rw-r--r--   net/openvswitch/datapath.c   3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index d8c364d637b1..24cb73e62f55 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -227,7 +227,8 @@ void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key)
 	stats = this_cpu_ptr(dp->stats_percpu);

 	/* Look up flow. */
-	flow = ovs_flow_tbl_lookup_stats(&dp->table, key, &n_mask_hit);
+	flow = ovs_flow_tbl_lookup_stats(&dp->table, key, skb_get_hash(skb),
+					 &n_mask_hit);
 	if (unlikely(!flow)) {
 		struct dp_upcall_info upcall;
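From the call site above, the lookup helper in flow_table.c presumably gains a
hash parameter in this series. A hedged guess at the updated declaration (the
exact prototype lives in net/openvswitch/flow_table.h and may differ):

/* Presumed new signature: the skb hash lets the lookup consult a
 * flow-mask cache before scanning the full mask list. */
struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
					   const struct sw_flow_key *key,
					   u32 skb_hash,
					   u32 *n_mask_hit);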