author     David S. Miller <davem@davemloft.net>  2015-01-12 17:05:14 -0500
committer  David S. Miller <davem@davemloft.net>  2015-01-12 17:05:14 -0500
commit     d2c60b1350c9a3eb7ed407c18f50306762365646 (patch)
tree       23ea0c750d5edac292857a411fb3aed22f13e460
parent     team: Remove dead code (diff)
parent     tuntap: Increase the number of queues in tun. (diff)
download   linux-dev-d2c60b1350c9a3eb7ed407c18f50306762365646.tar.xz
           linux-dev-d2c60b1350c9a3eb7ed407c18f50306762365646.zip
Merge branch 'tuntap_queues'
Pankaj Gupta says:

====================
Increase the limit of tuntap queues

Networking under KVM works best if we allocate a per-vCPU rx and tx
queue in a virtual NIC. This requires a per-vCPU queue on the host side.
Modern physical NICs have multiqueue support for large numbers of
queues. To scale a vNIC to run multiple queues in parallel, up to the
maximum number of vCPUs, we need to increase the number of queues
supported in tuntap.

Changes from v4:
  PATCH 2: Michael S. Tsirkin - Updated change comment message.

Changes from v3:
  PATCH 1: Michael S. Tsirkin - Some cleanups and updated commit
           message. Perf numbers on 10 Gbps NIC.

Changes from v2:
  PATCH 3: David Miller - flex array adds an extra level of indirection
           for a preallocated array (dropped, as the flow array is
           allocated using kzalloc with failover to vzalloc).

Changes from v1:
  PATCH 2: David Miller - sysctl changes to limit the number of queues
           are not required for unprivileged users (dropped).

Changes from RFC:
  PATCH 1: Sergei Shtylyov - Add an empty line after declarations.
  PATCH 2: Jiri Pirko - Do not introduce new module parameters.
           Michael S. Tsirkin - We can use sysctl for limiting the max
           number of queues.

This series increases the number of tuntap queues. The original work
was done by 'jasowang@redhat.com'; this series takes the patch series
at 'https://lkml.org/lkml/2013/6/19/29' as a reference. As discussed in
that series, two issues prevented us from increasing the number of tun
queues:

- The netdev_queue array in the netdevice was allocated through
  kmalloc, which may require a high-order memory allocation when we
  have several queues. E.g. sizeof(netdev_queue) is 320 bytes, which
  means a high-order allocation would happen when the device has more
  than 16 queues.

- We store the hash buckets in tun_struct, which results in a very
  large tun_struct; that high-order memory allocation fails easily when
  memory is fragmented.

Commit 60877a32bce00041528576e6b8df5abe9251fa73 increased the number of
tx queues by falling back to vzalloc() when kmalloc() fails. This
series addresses the following:

- Increase the number of netdev_queue queues for rx in the same way as
  was done for tx queues, by falling back to vzalloc() when memory
  allocation with kmalloc() fails.

- Increase the number of queues to 256, so that the maximum number of
  queues equals the maximum number of vCPUs allowed in a guest.

I have also tested with multiple parallel netperf sessions for
different combinations of queues and CPUs. It works fine without much
increase in CPU load as the number of queues grows, and I see a good
increase in throughput with more queues, though I was limited to 8
physical CPUs.

For this test: two hosts (Host1 & Host2) are directly connected with a
cable. Host1 is running Guest1. Data is sent from Host2 to Guest1 via
Host1.
Host kernel: 3.19.0-rc2+, AMD Opteron(tm) Processor 6320
NIC: Emulex Corporation OneConnect 10Gb NIC (be3)

Patch Applied     %usr %nice  %sys %iowait  %irq %soft %steal %guest %gnice %idle  throughput

Single Queue, 2 vCPUs
Before Patch: all 0.19  0.00  0.16   0.07  0.04  0.10   0.00   0.18   0.00 99.26    57864.18
After Patch:  all 0.99  0.00  0.64   0.69  0.07  0.26   0.00   1.58   0.00 95.77    57735.77

2 Queues, 2 vCPUs
Before Patch: all 0.19  0.00  0.19   0.10  0.04  0.11   0.00   0.28   0.00 99.08    63083.09
After Patch:  all 0.87  0.00  0.73   0.78  0.09  0.35   0.00   2.04   0.00 95.14    62917.03

4 Queues, 4 vCPUs
Before Patch: all 0.20  0.00  0.21   0.11  0.04  0.12   0.00   0.32   0.00 99.00    80865.06
After Patch:  all 0.71  0.00  0.93   0.85  0.11  0.51   0.00   2.62   0.00 94.27    86463.19

8 Queues, 8 vCPUs
Before Patch: all 0.19  0.00  0.18   0.09  0.04  0.11   0.00   0.23   0.00 99.17    86795.31
After Patch:  all 0.65  0.00  1.18   0.93  0.13  0.68   0.00   3.38   0.00 93.05    89459.93

16 Queues, 8 vCPUs
After Patch:  all 0.61  0.00  1.59   0.97  0.18  0.92   0.00   4.32   0.00 91.41   120951.60
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
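For context on the interface whose limit this series raises: with a
multiqueue tap device, each file descriptor attached via TUNSETIFF is
one queue, and MAX_TAP_QUEUES caps how many a device may have. A
minimal userspace sketch (device name "tap0" and the queue count of 8
are illustrative, not from this series) that attaches several queues
using IFF_MULTI_QUEUE:

/* Attach N queues to one tap device: repeat TUNSETIFF with the same
 * interface name and IFF_MULTI_QUEUE set; each call yields a queue fd.
 * The kernel rejects attach attempts beyond MAX_TAP_QUEUES.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

static int tap_open_queue(const char *name)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0)
		return -1;
	memset(&ifr, 0, sizeof(ifr));
	/* Same name + IFF_MULTI_QUEUE attaches another queue. */
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

int main(void)
{
	int i, fds[8];	/* e.g. one queue per vCPU; 8 for illustration */

	for (i = 0; i < 8; i++) {
		fds[i] = tap_open_queue("tap0");
		if (fds[i] < 0) {
			perror("TUNSETIFF");
			return 1;
		}
	}
	printf("attached %d queues to tap0\n", i);
	return 0;
}

A per-vCPU guest NIC queue maps onto one such host-side fd, which is
why the cap needs to track the maximum vCPU count.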
-rw-r--r--  drivers/net/tun.c   7
-rw-r--r--  net/core/dev.c     13
2 files changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index c0df872f5b8c..74fdf1158448 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -124,10 +124,9 @@ struct tap_filter {
 	unsigned char	addr[FLT_EXACT_COUNT][ETH_ALEN];
 };
 
-/* DEFAULT_MAX_NUM_RSS_QUEUES were chosen to let the rx/tx queues allocated for
- * the netdevice to be fit in one page. So we can make sure the success of
- * memory allocation. TODO: increase the limit. */
-#define MAX_TAP_QUEUES	DEFAULT_MAX_NUM_RSS_QUEUES
+/* MAX_TAP_QUEUES 256 is chosen to allow rx/tx queues to be equal
+ * to max number of VCPUs in guest. */
+#define MAX_TAP_QUEUES	256
 #define MAX_TAP_FLOWS	4096
 
 #define TUN_FLOW_EXPIRE (3 * HZ)
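This cap increase is what forces the allocator change in net/core/dev.c
below: using the cover letter's figure of 320 bytes per struct
netdev_queue, 256 queues need 320 * 256 = 81920 bytes, a high-order
allocation that fails easily when memory is fragmented. A standalone
sketch of that arithmetic (the 320-byte size and 4 KiB page are the
cover letter's assumptions, not computed here):

#include <stdio.h>

int main(void)
{
	const unsigned long PAGE_SIZE = 4096;
	unsigned long sz = 320UL * 256;	/* sizeof(netdev_queue) * MAX_TAP_QUEUES */
	unsigned int order = 0;

	/* get_order(): smallest order with (PAGE_SIZE << order) >= sz */
	while ((PAGE_SIZE << order) < sz)
		order++;
	printf("%lu bytes -> order-%u allocation (%lu pages)\n",
	       sz, order, (PAGE_SIZE << order) / PAGE_SIZE);
	/* Prints: 81920 bytes -> order-5 allocation (32 pages) */
	return 0;
}

An order-5 request needs 32 physically contiguous pages, hence the
vzalloc() fallback in the next hunk.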
diff --git a/net/core/dev.c b/net/core/dev.c
index 683d493aa1bf..805456147c30 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6172,13 +6172,16 @@ static int netif_alloc_rx_queues(struct net_device *dev)
 {
 	unsigned int i, count = dev->num_rx_queues;
 	struct netdev_rx_queue *rx;
+	size_t sz = count * sizeof(*rx);
 
 	BUG_ON(count < 1);
 
-	rx = kcalloc(count, sizeof(struct netdev_rx_queue), GFP_KERNEL);
-	if (!rx)
-		return -ENOMEM;
-
+	rx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+	if (!rx) {
+		rx = vzalloc(sz);
+		if (!rx)
+			return -ENOMEM;
+	}
 	dev->_rx = rx;
 
 	for (i = 0; i < count; i++)
@@ -6808,7 +6811,7 @@ void free_netdev(struct net_device *dev)
 
 	netif_free_tx_queues(dev);
 #ifdef CONFIG_SYSFS
-	kfree(dev->_rx);
+	kvfree(dev->_rx);
 #endif
 
 	kfree(rcu_dereference_protected(dev->ingress_queue, 1));
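The two dev.c hunks together form the classic try-kmalloc-then-vmalloc
idiom that commit 60877a32bce0 already applied to the tx queue array.
A freestanding sketch of the pattern (the helper name net_kvzalloc()
is hypothetical; mainline only later gained a generic kvzalloc() for
this idiom):

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

static void *net_kvzalloc(size_t sz)
{
	/* Try physically contiguous memory first: cheaper to access,
	 * no page-table setup. __GFP_NOWARN because failure here is
	 * expected and handled; __GFP_REPEAT to try a bit harder
	 * before giving up. */
	void *p = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);

	if (!p)
		p = vzalloc(sz);	/* virtually contiguous fallback */
	return p;
}
/* Callers free with kvfree(p), which works for both branches. */

The kfree() -> kvfree() change in free_netdev() is the required other
half of the pattern: dev->_rx may now come from either allocator, and
kvfree() dispatches to the matching free path.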