path: root/tools/perf/scripts/python/export-to-postgresql.py
2022-01-07 | net: ena: Add debug prints for invalid req_id resets | Arthur Kiyanovski | 1 file, -4/+4
Add qid and req_id to error prints when ENA_REGS_RESET_INV_TX_REQ_ID reset occurs. Switch from %hu to %u, since u16 should be printed with %u, as explained in [1]. [1] - https://www.kernel.org/doc/html/latest/core-api/printk-formats.html Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
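For illustration only (not part of the patch above): u16 arguments are promoted to int in variadic printk-style calls, so plain %u is the matching specifier; the variable names and message below are hypothetical, with 'netdev' assumed to be in scope.

	u16 qid = 3, req_id = 42;	/* hypothetical values */

	/* u16 is promoted to int, so %u is the preferred specifier per
	 * printk-formats.rst; %hu is unnecessary.
	 */
	netdev_err(netdev, "Invalid req_id %u in qid %u\n", req_id, qid);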
2022-01-07 | net: ena: Remove ena_calc_queue_size_ctx struct | Arthur Kiyanovski | 2 files, -42/+19
This struct was used to pass data from a callee function to its caller. Its usage can be avoided. Removing it results in less code without any damage to code readability, and it also allows consolidating the ring size calculation into a single function (ena_calc_io_queue_size()). Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | net: ena: Move reset completion print to the reset function | Arthur Kiyanovski | 1 file, -2/+2
The print that indicates that device reset has finished is currently called from ena_restore_device(). Move it to ena_fw_reset_device() as it is the more natural location for it. Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | net: ena: Remove redundant return code check | Arthur Kiyanovski | 1 file, -1/+1
The ena_com_indirect_table_fill_entry() function only returns -EINVAL or 0, no need to check for -EOPNOTSUPP. Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | net: ena: Update LLQ header length in ena documentation | Arthur Kiyanovski | 1 file, -1/+1
LLQ entry length is 128 bytes. Therefore the maximum header in the entry is calculated by: tx_max_header_size = LLQ_ENTRY_SIZE - DESCRIPTORS_NUM_BEFORE_HEADER * 16 = 128 - 2 * 16 = 96 This patch updates the documentation so that it states the correct max header length. Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
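A hypothetical restatement of that arithmetic as C constants (the macro names below are illustrative, not the driver's):

	#define LLQ_ENTRY_SIZE			128	/* bytes per LLQ entry */
	#define DESCS_NUM_BEFORE_HEADER		2	/* descriptors preceding the header */
	#define DESC_SIZE			16	/* bytes per descriptor */

	/* 128 - 2 * 16 = 96 bytes left in the entry for the packet header */
	#define TX_MAX_HEADER_SIZE \
		(LLQ_ENTRY_SIZE - DESCS_NUM_BEFORE_HEADER * DESC_SIZE)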
2022-01-07 | net: ena: Change ENI stats support check to use capabilities field | Arthur Kiyanovski | 3 files, -13/+10
Use the capabilities field to query the device for ENI stats support. This replaces the previous method that tried to get the ENI stats during ena_probe() and used the success or failure as an indication for support by the device. Remove eni_stats_supported field from struct ena_adapter. This field was used for the previous method of querying for ENI stats support. Change the severity level of the print in case of ena_com_get_eni_stats() failure from info to error. With the previous method of querying for ENI stats support, failure to get ENI stats was normal for devices that don't support it. With the use of the capabilities field such a failure is unexpected, as it is called only if the device reported that it supports ENI stats. Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | net: ena: Add capabilities field with support for ENI stats capability | Arthur Kiyanovski | 3 files, -1/+30
This bitmask field indicates what capabilities are supported by the device. The capabilities field differs from the 'supported_features' field, which indicates what sub-commands for the set/get feature commands are supported. The sub-commands are specified in the 'feature_id' field of the 'ena_admin_set_feat_cmd' struct in the following way:

	struct ena_admin_set_feat_cmd cmd;
	cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE;
	cmd.feat_common.feature_

The 'capabilities' field, on the other hand, specifies different capabilities of the device. For example, whether the device supports querying of ENI stats. Also add an enumerator which contains all the capabilities. The first added capability macro is for the ENI stats feature. Capabilities are queried along with the other device attributes (in ena_com_get_dev_attr_feat()) during device initialization and are stored in the ena_com_dev struct. They can be later queried using the ena_com_get_cap() helper function. Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
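A minimal sketch of how such a capability bit could be tested before issuing a query; the enum, helper, and call site here are illustrative assumptions, not the actual ena_com API.

	enum example_capabilities {
		EXAMPLE_CAP_ENI_STATS = 0,	/* bit index in the capabilities mask */
	};

	/* true if the device advertised the capability during init */
	static bool example_get_cap(u32 capabilities, enum example_capabilities cap)
	{
		return capabilities & BIT(cap);
	}

	/* ... later: only issue the ENI stats query when support was reported */
	if (example_get_cap(dev_caps, EXAMPLE_CAP_ENI_STATS))
		example_query_eni_stats();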
2022-01-07 | net: ena: Change return value of ena_calc_io_queue_size() to void | Arthur Kiyanovski | 1 file, -5/+3
ena_calc_io_queue_size() always returns 0, therefore make it a void function and update the calling function to stop checking the return value. Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | af_packet: fix tracking issues in packet_do_bind() | Eric Dumazet | 1 file, -19/+8
It appears that my changes in packet_do_bind() were slightly wrong. syzbot found that calling bind() twice would trigger a false positive. Remove the proto_curr/dev_curr variables and rewrite things to be less confusing (like not having to use netdev_tracker_alloc(), and instead using the standard dev_hold_track()). Fixes: f1d9268e0618 ("net: add net device refcount tracker to struct packet_type") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Link: https://lore.kernel.org/r/20220107183953.3886647-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | mptcp: reuse __mptcp_make_csum in validate_data_csum | Geliang Tang | 1 file, -9/+6
This patch reuses __mptcp_make_csum() in validate_data_csum() instead of open-coding it. Signed-off-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | mptcp: change the parameter of __mptcp_make_csum | Geliang Tang | 2 files, -4/+5
This patch changes the type of the last parameter of __mptcp_make_csum() from __sum16 to __wsum, and exports this function in protocol.h. Signed-off-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | selftests: mptcp: more stable join tests-cases | Paolo Abeni | 1 file, -52/+68
MPTCP join self-tests are a bit fragile as they rely on delays instead of events to catch up with the expected socket states. Replace the delays with state checking where possible and reduce the number of sleeps in the most complex scenarios. This will both reduce the tests' run-time and improve stability. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | net: dsa: felix: add port fast age support | Vladimir Oltean | 3 files, -0/+50
Add support for flushing the MAC table on a given port in the ocelot switch library, and use this functionality in the felix DSA driver. This operation is needed when a port leaves a bridge to become standalone, when learning is disabled, and when the STP state changes to one where no FDB entry should be present. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20220107144229.244584-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | net: mscc: ocelot: fix incorrect balancing with down LAG ports | Vladimir Oltean | 1 file, -15/+11
Assuming the test setup described here: https://patchwork.kernel.org/project/netdevbpf/cover/20210205130240.4072854-1-vladimir.oltean@nxp.com/ (swp1 and swp2 are in bond0, and bond0 is in a bridge with swp0) it can be seen that when swp1 goes down (on either board A or B), traffic that should go through that port isn't forwarded anywhere. A dump of the PGID table shows the following:

	PGID_DST[0] = ports 0
	PGID_DST[1] = ports 1
	PGID_DST[2] = ports 2
	PGID_DST[3] = ports 3
	PGID_DST[4] = ports 4
	PGID_DST[5] = ports 5
	PGID_DST[6] = no ports
	PGID_AGGR[0..15] = ports 0, 1, 2, 3, 4, 5 (identical for all 16 aggregation codes)
	PGID_SRC[0] = ports 1, 2
	PGID_SRC[1] = ports 0
	PGID_SRC[2] = ports 0
	PGID_SRC[3] = no ports
	PGID_SRC[4] = no ports
	PGID_SRC[5] = no ports
	PGID_SRC[6] = ports 0, 1, 2, 3, 4, 5

Whereas a "good" PGID configuration for that setup should have looked like this:

	PGID_DST[0] = ports 0
	PGID_DST[1] = ports 1, 2
	PGID_DST[2] = ports 1, 2
	PGID_DST[3] = ports 3
	PGID_DST[4] = ports 4
	PGID_DST[5] = ports 5
	PGID_DST[6] = no ports
	PGID_AGGR[0..15] = ports 0, 2, 3, 4, 5 (identical for all 16 aggregation codes)
	PGID_SRC[0] = ports 1, 2
	PGID_SRC[1] = ports 0
	PGID_SRC[2] = ports 0
	PGID_SRC[3] = no ports
	PGID_SRC[4] = no ports
	PGID_SRC[5] = no ports
	PGID_SRC[6] = ports 0, 1, 2, 3, 4, 5

In other words, in the "bad" configuration, the attempt is to remove the inactive swp1 from the destination ports via PGID_DST. But when a MAC table entry is learned, it is learned towards PGID_DST 1, because that is the logical port id of the LAG itself (it is equal to the lowest numbered member port). So when swp1 becomes inactive, if we set PGID_DST[1] to contain just swp1 and not swp2, the packet will not have any chance to reach the destination via swp2. The "correct" way to remove swp1 as a destination is via PGID_AGGR (remove swp1 from the aggregation port groups for all aggregation codes). This means that PGID_DST[1] and PGID_DST[2] must still contain both swp1 and swp2. This makes the MAC table still treat packets destined towards the single-port LAG as "multicast", and the inactive ports are removed via the aggregation code tables. The change presented here is a design one: the ocelot_get_bond_mask() function used to take an "only_active_ports" argument. We don't need that. The only call site that specifies only_active_ports=true, ocelot_set_aggr_pgids(), must retrieve the entire bonding mask, because it must program that into PGID_DST. Additionally, it must also clear the inactive ports from the bond mask here, which it can't do if bond_mask just contains the active ports:

	ac = ocelot_read_rix(ocelot, ANA_PGID_PGID, i);
	ac &= ~bond_mask;                <---- here
	/* Don't do division by zero if there was no active
	 * port. Just make all aggregation codes zero.
	 */
	if (num_active_ports)
		ac |= BIT(aggr_idx[i % num_active_ports]);
	ocelot_write_rix(ocelot, ac, ANA_PGID_PGID, i);

So it becomes the responsibility of ocelot_set_aggr_pgids() to take ocelot_port->lag_tx_active into consideration when populating the aggr_idx array. Fixes: 23ca3b727ee6 ("net: mscc: ocelot: rebalance LAGs on link up/down events") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20220107164332.402133-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | net/tls: Fix skb memory leak when running kTLS traffic | Gal Pressman | 1 file, -0/+1
The cited Fixes commit introduced a memory leak when running kTLS traffic (with/without hardware offloads). I'm running nginx on the server side and wrk on the client side and get the following:

	unreferenced object 0xffff8881935e9b80 (size 224):
	  comm "softirq", pid 0, jiffies 4294903611 (age 43.204s)
	  hex dump (first 32 bytes):
	    80 9b d0 36 81 88 ff ff 00 00 00 00 00 00 00 00  ...6............
	    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
	  backtrace:
	    [<00000000efe2a999>] build_skb+0x1f/0x170
	    [<00000000ef521785>] mlx5e_skb_from_cqe_mpwrq_linear+0x2bc/0x610 [mlx5_core]
	    [<00000000945d0ffe>] mlx5e_handle_rx_cqe_mpwrq+0x264/0x9e0 [mlx5_core]
	    [<00000000cb675b06>] mlx5e_poll_rx_cq+0x3ad/0x17a0 [mlx5_core]
	    [<0000000018aac6a9>] mlx5e_napi_poll+0x28c/0x1b60 [mlx5_core]
	    [<000000001f3369d1>] __napi_poll+0x9f/0x560
	    [<00000000cfa11f72>] net_rx_action+0x357/0xa60
	    [<000000008653b8d7>] __do_softirq+0x282/0x94e
	    [<00000000644923c6>] __irq_exit_rcu+0x11f/0x170
	    [<00000000d4085f8f>] irq_exit_rcu+0xa/0x20
	    [<00000000d412fef4>] common_interrupt+0x7d/0xa0
	    [<00000000bfb0cebc>] asm_common_interrupt+0x1e/0x40
	    [<00000000d80d0890>] default_idle+0x53/0x70
	    [<00000000f2b9780e>] default_idle_call+0x8c/0xd0
	    [<00000000c7659e15>] do_idle+0x394/0x450

I'm not familiar with these areas of the code, but I've added this sk_defer_free_flush() to tls_sw_recvmsg() based on a hunch and it resolved the issue. Fixes: f35f821935d8 ("tcp: defer skb freeing after socket lock is released") Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20220102081253.9123-1-gal@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | iavf: remove an unneeded variable | Jason Wang | 1 file, -3/+1
The variable `ret_code' used as the return value is never changed in the `iavf_shutdown_adminq' function, so it can be removed and the function can simply return its initial value, 0, at the end. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-07 | i40e: remove variables set but not used | Yang Li | 1 file, -5/+0
The code that uses variables pe_cntx_size and pe_filt_size has been removed, so they should be removed as well. Eliminate the following clang warnings:

	drivers/net/ethernet/intel/i40e/i40e_common.c:4139:20: warning: variable 'pe_filt_size' set but not used.
	drivers/net/ethernet/intel/i40e/i40e_common.c:4139:6: warning: variable 'pe_cntx_size' set but not used.

Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-07 | i40e: Remove non-inclusive language | Mateusz Palczewski | 2 files, -3/+3
Remove non-inclusive language from the driver. Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-07 | i40e: Update FW API version | Mateusz Palczewski | 1 file, -2/+2
Update FW API versions to the newest supported NVM images. Signed-off-by: Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-07 | i40e: Minimize amount of busy-waiting during AQ send | Jedrzej Jagielski | 3 files, -14/+35
The i40e_asq_send_command will now use a non-blocking usleep_range if possible (non-atomic context), instead of a busy-waiting udelay. The usleep_range function uses hrtimers to provide better performance and removes the negative impact of busy-waiting in time-critical environments.

1. Rename i40e_asq_send_command to i40e_asq_send_command_atomic and add a 5th parameter to inform if called from an atomic context. Inside, call usleep_range (if non-atomic) or udelay (if atomic).
2. Change i40e_asq_send_command to invoke i40e_asq_send_command_atomic(..., false).
3. Change two functions:
   - i40e_aq_set_vsi_uc_promisc_on_vlan
   - i40e_aq_set_vsi_mc_promisc_on_vlan
   to explicitly use i40e_asq_send_command_atomic(..., true) instead of i40e_asq_send_command, as they use spinlocks and do some work in an atomic context.

All other calls to i40e_asq_send_command remain unchanged. Signed-off-by: Dawid Lukwinski <dawid.lukwinski@intel.com> Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com> Tested-by: Tony Brelinski <tony.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
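A minimal sketch of the atomic/non-atomic split described above; the helper name and the delay bounds are assumptions for illustration only.

	#include <linux/delay.h>

	static void example_aq_wait(bool is_atomic)
	{
		if (is_atomic)
			udelay(50);		/* busy-wait: the only safe option in atomic context */
		else
			usleep_range(40, 60);	/* hrtimer-backed sleep: avoids burning CPU time */
	}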
2022-01-07 | i40e: Add ensurance of MacVlan resources for every trusted VF | Karen Sornek | 1 file, -5/+29
A trusted VF can use up every available resource, leaving nothing for other trusted VFs. Introduce a define which calculates the MacVlan resources available, based on the maximum available MacVlan resources, the bare minimum for each VF, and the number of currently allocated VFs. Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com> Signed-off-by: Karen Sornek <karen.sornek@intel.com> Tested-by: Tony Brelinski <tony.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-07 | sch_cake: revise Diffserv docs | Kevin Bracey | 1 file, -20/+20
Documentation incorrectly stated that CS1 is equivalent to LE for diffserv8. But when LE was added to the table, CS1 was pushed into tin 1, leaving only LE in tin 0. Also, "TOS1" no longer exists, as that is the same codepoint as LE. Make other tweaks properly distinguishing codepoints from classes and putting current Diffserv codepoints ahead of legacy ones. Signed-off-by: Kevin Bracey <kevin@bracey.fi> Acked-by: Toke Høiland-Jørgensen <toke@toke.dk> Link: https://lore.kernel.org/r/20220106215637.3132391-1-kevin@bracey.fi Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-07 | mptcp: avoid atomic bit manipulation when possible | Paolo Abeni | 3 files, -31/+38
Currently the msk->flags bitmask carries both state for the mptcp_release_cb() - mostly touched under the mptcp data lock - and other state info touched even outside that lock's scope. As a consequence, msk->flags is always manipulated with atomic operations. This change splits such a bitmask into two separate fields, so that we use plain bit operations when touching the cb-related info. The MPTCP_PUSH_PENDING bit needs additional care, as it is the only CB-related field currently accessed either under the mptcp data lock or the mptcp socket lock. Let's add another mask just for such bit's sake. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
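Roughly, the difference between the two kinds of flag fields looks like the fragment below; the lock, field, and bit names are illustrative, not the actual mptcp ones.

	/* cb_flags is only ever touched with the mptcp data lock held,
	 * so the non-atomic bit helpers are sufficient.
	 */
	spin_lock_bh(&msk_data_lock);
	__set_bit(EXAMPLE_CB_PUSH_PENDING, &cb_flags);
	spin_unlock_bh(&msk_data_lock);

	/* flags can be touched outside any lock, so it keeps using the
	 * atomic helpers.
	 */
	set_bit(EXAMPLE_FLAG_EOF, &flags);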
2022-01-07 | mptcp: cleanup MPJ subflow list handling | Paolo Abeni | 5 files, -104/+60
We can simplify the join list handling leveraging the mptcp_release_cb(): if we can acquire the msk socket lock at mptcp_finish_join time, move the new subflow directly into the conn_list, otherwise place it on join_list and let the release_cb process such list. Since pending MPJ connections are now always processed in a timely way, we can avoid flushing the join list every time we have to process all the current subflows. Additionally we can now use the mptcp data lock to protect the join_list, removing the additional spin lock. Finally, the MPJ handshake is now always finalized under the msk socket lock, so we can drop the additional synchronization between mptcp_finish_join() and mptcp_close(). Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-07 | selftests: mptcp: add tests for subflow creation failure | Paolo Abeni | 2 files, -1/+78
Verify that, when multiple endpoints are available, subflow creation proceeds even when the first additional subflow creation fails - due to packet drops on the relevant link. Co-developed-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-07 | mptcp: do not block subflows creation on errors | Paolo Abeni | 4 files, -31/+71
If the MPTCP configuration allows for multiple subflows creation, and the first additional subflows never reach the fully established status - e.g. due to packet drops or resets - the in-kernel path manager does not move to the next subflow. This patch introduces a new PM helper to cope with MPJ subflow creation failure and delay, and hooks it where appropriate. Such helper triggers additional subflow creation as needed and updates the PM subflow counter if the current one is closing. Additionally, start all the needed additional subflows as soon as the MPTCP socket is fully established, so we don't have to cope with a slow MPJ handshake blocking the next subflow creation. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-07 | mptcp: keep track of local endpoint still available for each msk | Paolo Abeni | 5 files, -55/+91
Include in the path manager status a bitmap tracking the list of local endpoints still available - not yet used - for the relevant mptcp socket. Keep such map updated at endpoint creation/deletion time, so that we can easily skip already used endpoints at local address selection time. The endpoint used by the initial subflow is lazily accounted at subflow creation time: the usage bitmap is up to date before endpoint selection, and we avoid such unneeded task in some relevant scenarios - e.g. busy servers accepting incoming subflows but not creating any additional ones nor announcing additional addresses. Overall this allows for fair local endpoints usage in case of subflow failure. As a side effect, this patch also enforces that each endpoint is used at most once for each mptcp connection. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
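A sketch of such bitmap bookkeeping using the generic kernel bitmap helpers; MAX_ENDPOINTS and the surrounding names are hypothetical, not the mptcp PM fields.

	DECLARE_BITMAP(id_avail_bitmap, MAX_ENDPOINTS);
	unsigned int id;

	bitmap_fill(id_avail_bitmap, MAX_ENDPOINTS);	/* every endpoint starts out available */

	/* at local address selection time, pick the first unused endpoint */
	id = find_first_bit(id_avail_bitmap, MAX_ENDPOINTS);
	if (id < MAX_ENDPOINTS)
		__clear_bit(id, id_avail_bitmap);	/* mark it used for this connection */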
2022-01-07 | mptcp: clean-up MPJ option writing | Paolo Abeni | 1 file, -21/+23
Check for all MPJ variants at once; this reduces the number of conditionals traversed on average and will simplify the next patch. No functional change intended. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-07 | mptcp: fix per socket endpoint accounting | Paolo Abeni | 1 file, -6/+12
Since full-mesh endpoint support, the reception of a single ADD_ADDR option can cause multiple subflows creation. When such option is accepted we increment 'add_addr_accepted' by one. When we receive a paired RM_ADDR option, we delete all the relevant subflows, decrementing 'add_addr_accepted' by one for each of them. We have a similar issue for 'local_addr_used'. Fix both by moving the pm endpoint accounting outside the subflow traversal. Fixes: 1a0d6136c5f0 ("mptcp: local addresses fullmesh") Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-07 | selftests: mptcp: add disconnect tests | Paolo Abeni | 2 files, -27/+160
Perform several disconnect/reconnect cycles on the same socket, ensuring the overall transfer is successful. The new test leverages ioctl(SIOCOUTQ) to ensure all the pending data is acked before disconnecting. Additionally, order the test program argument list alphabetically for better maintainability. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
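In userspace, the SIOCOUTQ check before disconnecting can be sketched as below; error handling and timeouts are trimmed, and 'fd' is assumed to be the connected socket.

	#include <sys/ioctl.h>
	#include <linux/sockios.h>

	int pending = 1;

	/* SIOCOUTQ reports the amount of send-queue data not yet acked
	 * by the peer, so loop until everything we queued was acked.
	 */
	while (pending > 0) {
		if (ioctl(fd, SIOCOUTQ, &pending) < 0)
			break;
	}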
2022-01-07 | mptcp: implement support for user-space disconnect | Paolo Abeni | 1 file, -4/+13
Handle AF_UNSPEC explicitly in mptcp_stream_connect() to allow user-space to disconnect established MPTCP connections. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
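From userspace, such a disconnect can be sketched as follows; 'fd' is assumed to be an established MPTCP (or TCP) socket.

	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>

	struct sockaddr sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_family = AF_UNSPEC;

	/* connect() towards AF_UNSPEC asks the kernel to tear the
	 * connection down while keeping the descriptor usable for a
	 * later connect()/bind().
	 */
	if (connect(fd, &sa, sizeof(sa)) < 0)
		perror("disconnect via AF_UNSPEC");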
2022-01-07 | mptcp: cleanup accept and poll | Paolo Abeni | 3 files, -20/+7
After the previous patch, msk->subflow will never be deleted during the whole msk lifetime. We no longer need to acquire references to it in mptcp_stream_accept(), and we can use the listener subflow accept queue to simplify mptcp_poll() for listener sockets. Overall this removes a lock pair and 4 more atomic operations per accept(). Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-07 | mptcp: full disconnect implementation | Paolo Abeni | 4 files, -28/+98
The current mptcp_disconnect() implementation lacks several steps: we additionally need to reset the msk socket state and flush the subflow list. Factor out the needed helper to avoid code duplication. Additionally, ensure that the initial subflow is disposed of only after mptcp_close(), and just reset it at disconnect time. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-07 | mptcp: implement fastclose xmit path | Paolo Abeni | 2 files, -12/+46
Allow the MPTCP xmit path to add MP_FASTCLOSE suboption on RST egress packets. Additionally reorder related options writing to reduce the number of conditionals required in the fast path. Co-developed-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Geliang Tang <geliang.tang@suse.com> Co-developed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-07 | mptcp: keep snd_una updated for fallback socket | Paolo Abeni | 1 file, -0/+1
After shutdown, for fallback MPTCP sockets, we always have write_seq == snd_una+1. The above will foul the OUTQ ioctl(). Keep snd_una in sync with write_seq even after shutdown. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-06 | mlxsw: spectrum: Extend to support Spectrum-4 ASIC | Amit Cohen | 4 files, -1/+98
Extend existing driver for Spectrum, Spectrum-2 and Spectrum-3 ASICs to support Spectrum-4 ASIC as well. Currently there is no released firmware version for Spectrum-4, so the driver is not enforcing a minimum version. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | mlxsw: spectrum_acl_bloom_filter: Add support for Spectrum-4 calculation | Amit Cohen | 2 files, -16/+252
Spectrum-4 will calculate the hash function for the bloom filter differently from the existing ASICs. First, two hash functions will be used to calculate a 16-bit result. The final result will be a combination of the two: the 6 bits produced by CRC-6 will be used as the MSBs and the 10 bits produced by CRC-10 as the LSBs. Second, while in Spectrum-{2,3} there is padding in each chunk, so the chunks use a sequence of whole bytes, in Spectrum-4 there is no padding; each chunk uses 20 bytes minus 2 bits, so it is necessary to align the chunks to be without holes. Add a dedicated 'mlxsw_sp_acl_bf_ops' for Spectrum-4 and add the required tables for CRC calculations. All the details are documented as part of the code for future use. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | mlxsw: Add operations structure for bloom filter calculation | Amit Cohen | 4 files, -2/+18
Spectrum-4 will calculate the hash function for the bloom filter differently from the existing ASICs. There are two changes:

1. Instead of using one hash function to calculate a 16-bit output (CRC-16), two functions will be used.
2. The chunks will be built differently, without padding.

As preparation for support of the Spectrum-4 bloom filter, add an 'ops' structure to allow handling different calculations for different ASICs. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
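The combination of the two hash results described in these commits could be sketched as below; crc6()/crc10() are placeholders for the actual table-driven calculations, not existing helpers.

	/* 6-bit CRC becomes the MSBs and 10-bit CRC the LSBs of the
	 * 16-bit bloom filter index.
	 */
	static u16 example_bf_hash(const u8 *chunk, size_t len)
	{
		u16 msb = crc6(chunk, len) & 0x3f;	/* placeholder CRC-6 */
		u16 lsb = crc10(chunk, len) & 0x3ff;	/* placeholder CRC-10 */

		return (msb << 10) | lsb;
	}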
2022-01-06 | mlxsw: spectrum_acl_bloom_filter: Rename Spectrum-2 specific objects for future use | Amit Cohen | 1 file, -25/+25
Spectrum-4 will calculate the hash function for the bloom filter differently from the existing ASICs. There are two changes:

1. Instead of using one hash function to calculate a 16-bit output (CRC-16), two functions will be used.
2. The chunks will be built differently, without padding.

As preparation for support of the Spectrum-4 bloom filter, rename the CRC table to include the "sp2" prefix and "crc16", as the next patch will add two additional tables. In addition, rename all the dedicated functions and defines for Spectrum-{2,3} to include the "sp2" prefix. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | mlxsw: spectrum_acl_bloom_filter: Make mlxsw_sp_acl_bf_key_encode() more flexible | Amit Cohen | 1 file, -11/+25
Spectrum-4 will calculate the hash function for the bloom filter differently from the existing ASICs. One of the changes is related to the way the chunks will be built - without padding. As preparation for support of the Spectrum-4 bloom filter, make mlxsw_sp_acl_bf_key_encode() more flexible, so it can be used for Spectrum-4 as well. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | mlxsw: spectrum_acl_bloom_filter: Reorder functions to make the code more aesthetic | Amit Cohen | 1 file, -8/+8
Currently, mlxsw_sp_acl_bf_rule_count_index_get() is implemented before mlxsw_sp_acl_bf_index_get() but is used after it. Adding a new function for Spectrum-4 would make them further apart still. Fix by moving them around. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | mlxsw: Introduce flex key elements for Spectrum-4 | Amit Cohen | 2 files, -0/+43
Spectrum-4 ASIC will support more virtual routers and local ports compared to the existing ASICs. Therefore, the virtual router and local port ACL key elements need to be increased. Introduce new key elements for Spectrum-4 to be aligned with the elements used already for other Spectrum ASICs. The key blocks layout is the same for Spectrum-4, so use the existing code for encode_block() and clear_block(), just create separate blocks. Note that size of `VIRT_ROUTER_MSB` is 4 bits in Spectrum-4, therefore declare it using `MLXSW_AFK_ELEMENT_INST_U32()`, in order to be able to set `.avoid_size_check` to true. Otherwise, `mlxsw_afk_blocks_check()` will fail and warn. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | mlxsw: Rename virtual router flex key element | Amit Cohen | 4 files, -12/+12
In Spectrum-4, the size of the virtual router ACL key element increased from 11 bits to 12 bits. In order to reuse the existing virtual router ACL key element enumerators for Spectrum-4, rename 'VIRT_ROUTER_8_10' and 'VIRT_ROUTER_0_7' to 'VIRT_ROUTER_MSB' and 'VIRT_ROUTER_LSB', respectively. No functional changes intended. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | dpaa2-switch: check if the port priv is valid | Ioana Ciornei | 1 file, -3/+6
Before accessing the port private structure make sure that there is still a non-NULL pointer there. A NULL pointer access can happen when we are on the remove path: some switch ports are already unregistered and some are in the process of unregistering. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | dpaa2-mac: return -EPROBE_DEFER from dpaa2_mac_open in case the fwnode is not set | Ioana Ciornei | 1 file, -1/+14
We could get into a situation where the fwnode of the parent device is not yet set because its probe has not yet finished. When this happens, any caller of dpaa2_mac_open() will not have the fwnode available, thus causing problems at PHY connect time. Avoid this by just returning -EPROBE_DEFER from dpaa2_mac_open() when this happens. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
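The deferral itself is a common driver pattern; a rough sketch under the assumption that 'dev' is the device being probed (variable names are illustrative, not the dpaa2-mac code):

	struct fwnode_handle *parent_fwnode;

	parent_fwnode = dev_fwnode(dev->parent);	/* still NULL until the parent's probe finishes */
	if (!parent_fwnode)
		return -EPROBE_DEFER;	/* the driver core will retry this probe later */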
2022-01-06 | dpaa2-mac: bail if the dpmacs fwnode is not found | Robert-Ionut Alexa | 1 file, -1/+4
The parent pointer node handler must be declared with a NULL initializer. Before using it, a check must be performed to make sure that a valid address has been assigned to it. Signed-off-by: Robert-Ionut Alexa <robert-ionut.alexa@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06 | bpf: selftests: Add bind retry for post_bind{4, 6} | Menglong Dong | 1 file, -20/+130
With the previous patch, the kernel is able to 'put_port' after sys_bind() fails. Add the test for that case: rebind another port after sys_bind() fails. If the bind succeeds, it means the previous bind operation has already been undone. Signed-off-by: Menglong Dong <imagedong@tencent.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220106132022.3470772-4-imagedong@tencent.com
2022-01-06 | bpf: selftests: Use C99 initializers in test_sock.c | Menglong Dong | 1 file, -128/+92
Use C99 initializers for the initialization of 'tests' in test_sock.c. Signed-off-by: Menglong Dong <imagedong@tencent.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220106132022.3470772-3-imagedong@tencent.com
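For illustration, the difference between positional and C99 designated initializers, shown on a made-up test descriptor struct (not the actual 'tests' layout in test_sock.c):

	struct sock_test {
		const char *descr;
		int domain;
		int expected_result;
	};

	/* positional: every field must appear, in declaration order */
	struct sock_test old_style = { "bind4 ok", AF_INET, 0 };

	/* C99 designated: self-documenting, order-independent, and any
	 * omitted field is zero-initialized
	 */
	struct sock_test new_style = {
		.descr = "bind4 ok",
		.domain = AF_INET,
		.expected_result = 0,
	};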
2022-01-06 | net: bpf: Handle return value of BPF_CGROUP_RUN_PROG_INET{4,6}_POST_BIND() | Menglong Dong | 9 files, -0/+11
The return value of BPF_CGROUP_RUN_PROG_INET{4,6}_POST_BIND() in __inet_bind() is not handled properly. When the return value is non-zero, it will set inet_saddr and inet_rcv_saddr to 0 and exit:

	err = BPF_CGROUP_RUN_PROG_INET4_POST_BIND(sk);
	if (err) {
		inet->inet_saddr = inet->inet_rcv_saddr = 0;
		goto out_release_sock;
	}

Let's take UDP for example and see what will happen. For a UDP socket, it will be added to 'udp_prot.h.udp_table->hash' and 'udp_prot.h.udp_table->hash2' after sk->sk_prot->get_port() succeeds. If 'inet->inet_rcv_saddr' is specified here, then 'sk' will be in the 'hslot2' of 'hash2' that it doesn't belong to (because inet_saddr is changed to 0), and UDP packets received will not be passed to this sock. If 'inet->inet_rcv_saddr' is not specified here, the sock will work fine, as it can receive packets properly, which is weird, as the 'bind()' has already failed. To undo the get_port() operation, introduce the 'put_port' field for 'struct proto'. For the TCP proto, it is inet_put_port(); for the UDP proto, it is udp_lib_unhash(); for the icmp proto, it is ping_unhash(). Therefore, after a sys_bind() failure caused by BPF_CGROUP_RUN_PROG_INET4_POST_BIND(), the socket will be unbound, which means that it can try to be bound to another port. Signed-off-by: Menglong Dong <imagedong@tencent.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220106132022.3470772-2-imagedong@tencent.com
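Based on the description above, the undo path can be sketched roughly as follows; treat this as an illustration of the idea rather than the exact patch hunk.

	err = BPF_CGROUP_RUN_PROG_INET4_POST_BIND(sk);
	if (err) {
		inet->inet_saddr = inet->inet_rcv_saddr = 0;
		/* release the port taken by get_port() so a later
		 * bind() on this socket can succeed
		 */
		if (sk->sk_prot->put_port)
			sk->sk_prot->put_port(sk);
		goto out_release_sock;
	}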
2022-01-06 | Documentation: devlink: mlx5.rst: Fix htmldoc build warning | Saeed Mahameed | 1 file, -0/+1
Fix the following build warning:

	Documentation/networking/devlink/mlx5.rst:13: WARNING: Error parsing content block for the "list-table" directive:
	+uniform two-level bullet list expected, but row 2 does not contain the same number of items as row 1 (2 vs 3).
	...

Add the missing item in the first row. Fixes: 0844fa5f7b89 ("net/mlx5: Let user configure io_eq_size param") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>