Age | Commit message | Author | Files | Lines
2022-01-28 | SUNRPC: add netns refcount tracker to struct rpc_xprt | Eric Dumazet | 2 | -2/+3
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-28 | SUNRPC: add netns refcount tracker to struct gss_auth | Eric Dumazet | 1 | -4/+6
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-28 | SUNRPC: add netns refcount tracker to struct svc_xprt | Eric Dumazet | 2 | -2/+3
struct svc_xprt holds a long-lived reference to a netns, so it is worth tracking. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
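A minimal sketch of the netns refcount tracker pattern these SUNRPC patches adopt; the struct and function names below are illustrative rather than the actual RPC code, while get_net_track()/put_net_track() and the netns_tracker type are the real helpers:

struct foo_xprt {
        struct net *xpt_net;
        netns_tracker ns_tracker;       /* pairs the long-lived netns ref with a tracker entry */
};

static void foo_xprt_init(struct foo_xprt *xprt, struct net *net)
{
        /* take a netns reference and register it with the reference tracker */
        xprt->xpt_net = get_net_track(net, &xprt->ns_tracker, GFP_KERNEL);
}

static void foo_xprt_destroy(struct foo_xprt *xprt)
{
        /* drop the reference and release the matching tracker entry */
        put_net_track(xprt->xpt_net, &xprt->ns_tracker);
}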
2022-01-28 | bnxt: report header-data split state | Jakub Kicinski | 1 | -0/+3
Aggregation rings imply header-data split. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-28 | ethtool: add header/data split indication | Jakub Kicinski | 4 | -5/+27
For applications running on a mix of platforms it's useful to have a clear indication of whether the host's NIC supports the geometry requirements of TCP zero-copy. TCP zero-copy Rx requires data to be neatly placed into memory pages. Most NICs can't do that. This patch adds GET support only, since the NICs I work with either always have the feature enabled or enable it whenever the MTU is set to jumbo. In other words, I don't need SET, but adding it should be trivial. (The only note on SET is that we will likely want the setting to be "sticky" and use 0 / `unknown` to reset it back to the driver default.) Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
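On the driver side (as in the bnxt patch above), reporting the split state amounts to filling the new ring-params field in .get_ringparam(). A sketch, assuming the kernel_ethtool_ringparam field and enum values introduced by this patch and a hypothetical foo driver with an hds_enabled flag:

static void foo_get_ringparam(struct net_device *dev,
                              struct ethtool_ringparam *ring,
                              struct kernel_ethtool_ringparam *kernel_ring,
                              struct netlink_ext_ack *extack)
{
        struct foo_priv *priv = netdev_priv(dev);

        /* GET-only indication: report whether header/data split is active */
        kernel_ring->tcp_data_split = priv->hds_enabled ?
                ETHTOOL_TCP_DATA_SPLIT_ENABLED :
                ETHTOOL_TCP_DATA_SPLIT_DISABLED;
}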
2022-01-28 | net: dsa: microchip: Add property to disable reference clock | Robert Hancock | 3 | -3/+13
Add a new microchip,synclko-disable property which can be specified to disable the reference clock output from the device if not required by the board design. Signed-off-by: Robert Hancock <robert.hancock@calian.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-28 | net: dsa: microchip: Document property to disable reference clock | Robert Hancock | 1 | -0/+6
Document the new microchip,synclko-disable property which can be specified to disable the reference clock output from the device if not required by the board design. Signed-off-by: Robert Hancock <robert.hancock@calian.com> Acked-by: Rob Herring <robh@kernel.org> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-28 | net: mii: remove mii_lpa_mod_linkmode_lpa_sgmii() | Jakub Kicinski | 1 | -33/+0
Vladimir points out that since we removed mii_lpa_to_linkmode_lpa_sgmii(), mii_lpa_mod_linkmode_lpa_sgmii() is also no longer called. Suggested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-28 | net: mvneta: remove unnecessary if condition in mvneta_xdp_submit_frame | Lorenzo Bianconi | 1 | -4/+2
Get rid of unnecessary if check on tx_desc pointer in mvneta_xdp_submit_frame routine since num_frames is always greater than 0 and tx_desc pointer is always initialized. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-28 | net: sparx5: use .mac_select_pcs() interface | Russell King (Oracle) | 2 | -1/+10
Convert sparx5 to use the mac_select_pcs() interface rather than using phylink_set_pcs(). The intention here is to unify the approach for PCS selection and eventually remove phylink_set_pcs(). Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-27 | ipv6: partially inline ipv6_fixup_options | Pavel Begunkov | 2 | -6/+14
Inline a part of ipv6_fixup_options() to avoid the overhead of a function call when opt is NULL. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
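Partial inlining here follows the usual pattern: keep the slow path out of line and add a static inline wrapper so callers don't pay a function call in the common (opt == NULL) case. A generic sketch with made-up names:

struct txoptions;

void fixup_options_slow(struct txoptions *opt);         /* out-of-line slow path */

/* inline fast path: no call overhead when there are no options to fix up */
static inline void fixup_options(struct txoptions *opt)
{
        if (!opt)
                return;
        fixup_options_slow(opt);
}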
2022-01-27 | ipv6: optimise dst refcounting on cork init | Pavel Begunkov | 2 | -5/+11
udpv6_sendmsg() doesn't need dst after calling ip6_make_skb(), so instead of taking an additional reference inside ip6_setup_cork() and releasing the initial one afterwards, we can hand over a reference into ip6_make_skb() saving two atomics. The only other user of ip6_setup_cork() is ip6_append_data() and it requires an extra dst_hold(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
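The saving comes from handing over the caller's existing dst reference instead of taking a second one and releasing the first. A simplified before/after sketch; the cork struct and helpers are hypothetical, only dst_hold()/dst_release() are the real primitives:

struct cork { struct dst_entry *dst; };

/* before: hold for the cork, caller later releases its own ref (two atomics) */
static void cork_set_dst_old(struct cork *cork, struct dst_entry *dst)
{
        dst_hold(dst);
        cork->dst = dst;
}

/* after: ownership of the existing reference moves into the cork */
static void cork_steal_dst(struct cork *cork, struct dst_entry **pdst)
{
        cork->dst = *pdst;
        *pdst = NULL;   /* caller must no longer dst_release() it */
}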
2022-01-27 | udp6: don't make extra copies of iflow | Pavel Begunkov | 1 | -43/+42
udpv6_sendmsg() first initialises an on-stack 88B struct flowi6 and then copies it into the cork, which is expensive. Avoid the copy in the corkless case by initialising the on-stack cork->fl directly. The main part is a couple of lines under the !corkreq check; the rest converts the fl6 variable to a pointer. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | udp6: pass flow in ip6_make_skb together with cork | Pavel Begunkov | 3 | -13/+13
Another preparation patch. inet_cork_full already contains a flowi field, so we can avoid passing a separate struct flowi6 into __ip6_append_data() and ip6_make_skb(), and use the flow stored in inet_cork_full instead. Make sure callers set cork->fl, i.e. init it in ip6_append_data() and before calling ip6_make_skb(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | ipv6: pass full cork into __ip6_append_data() | Pavel Begunkov | 1 | -3/+4
Convert the struct inet_cork argument in __ip6_append_data() to struct inet_cork_full. As one struct contains the other, inet_cork can still be accessed via the ->base field. It's a preparation patch making further changes a bit cleaner. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | ipv6: don't zero inet_cork_full::fl after use | Pavel Begunkov | 2 | -9/+2
There doesn't appear to be any reason for ip6_cork_release() to zero cork->fl; it will be fully filled on the next initialisation. This 88-byte memset accounts for 0.3-0.5% of total CPU cycles. The change is also needed by the following patches and allows removing an extra flow copy in udp_v6_push_pending_frames(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | ipv6: clean up cork setup/release | Pavel Begunkov | 1 | -23/+21
Clean up ip6_setup_cork() and ip6_cork_release() adding a local variable for v6_cork->opt. It's a preparation patch for further changes. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | ipv6: remove daddr temp buffer in __ip6_make_skb | Pavel Begunkov | 1 | -4/+3
ipv6_push_nfrag_opts() doesn't change passed daddr, and so __ip6_make_skb() doesn't actually need to keep an on-stack copy of fl6->daddr. Set initially final_dst to fl6->daddr, ipv6_push_nfrag_opts() will override it if needed, and get rid of extra copies. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | udp6: shuffle up->pending AF_INET bits | Pavel Begunkov | 1 | -3/+2
Corked AF_INET for ipv6 socket doesn't appear to be the hottest case, so move it out of the common path under up->pending check to remove overhead. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | ipv6: optimise dst refcounting on skb init | Pavel Begunkov | 1 | -1/+10
__ip6_make_skb() gets a cork->dst ref, hands it over to skb and shortly after puts cork->dst. Save two atomics by stealing it without extra referencing, ip6_cork_release() handles NULL cork->dst. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | mlxsw: spectrum_acl: Allocate default actions for internal TCAM regions | Ido Schimmel | 2 | -1/+13
In Spectrum-2 and later ASICs, each TCAM region has a default action that is executed in case a packet did not match any rule in the region. The location of the action in the database (KVDL) is computed by adding the region's index to a base value. Some TCAM regions are not exposed to the host and are used internally by the device. Allocate KVDL entries for the default actions of these regions to prevent the host from overwriting them. With mlxsw, lookups in the internal regions are not currently performed, but it is good practice not to overwrite their default actions. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | mlxsw: spectrum: Guard against invalid local ports | Amit Cohen | 4 | -7/+10
When processing events generated by the device's firmware, the driver protects itself from events reported for non-existent local ports, but not for the CPU port (local port 0), which exists but does not have all the fields of a regular local port. This can result in a NULL pointer dereference when trying to access 'struct mlxsw_sp_port' fields which are not initialized for the CPU port. Commit 63b08b1f6834 ("mlxsw: spectrum: Protect driver from buggy firmware") already handled such an issue by bailing early when processing a PUDE event reported for the CPU port. Generalize the approach by moving the check to a common function and making use of it in all relevant places. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | mlxsw: core: Consolidate trap groups to a single event group | Jiri Pirko | 4 | -9/+8
For event traps which are used in core, avoid having a separate trap group for each event. Instead, introduce a single core event trap group and use it for all event traps. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | mlxsw: core: Move functions to register/unregister array of traps to core.c | Jiri Pirko | 3 | -49/+58
These functions belong in core.c alongside the functions that register/unregister a single trap. Move them there, making them available for use by other parts of the mlxsw code. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | mlxsw: core: Move basic trap group initialization from spectrum.c | Jiri Pirko | 3 | -31/+21
Instead of initializing the trap groups used by core from spectrum.c via an op, do it directly in core.c. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | mlxsw: core: Move basic_trap_groups_set() call out of EMAD init code | Jiri Pirko | 1 | -6/+10
The call inits not only the EMAD group but other groups as well. Therefore, move it out of the EMAD init code and call it beforehand. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27 | mlxsw: spectrum: Set basic trap groups from an array | Jiri Pirko | 1 | -29/+18
Instead of calling the same code four times, do it in a loop over an array which contains the trap groups to be set. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
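The refactor is the standard table-plus-loop replacement for repeated calls; a generic sketch, with all names invented for illustration (the real group list and helpers live in the mlxsw driver):

struct core;
enum trap_group { TRAP_GROUP_A, TRAP_GROUP_B, TRAP_GROUP_C, TRAP_GROUP_D };

int set_trap_group(struct core *core, enum trap_group group);

static const enum trap_group basic_trap_groups[] = {
        TRAP_GROUP_A, TRAP_GROUP_B, TRAP_GROUP_C, TRAP_GROUP_D,
};

static int set_basic_trap_groups(struct core *core)
{
        int i, err;

        /* one loop instead of four copies of the same call */
        for (i = 0; i < ARRAY_SIZE(basic_trap_groups); i++) {
                err = set_trap_group(core, basic_trap_groups[i]);
                if (err)
                        return err;
        }
        return 0;
}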
2022-01-27 | net/mlx5: VLAN push on RX, pop on TX | Dima Chumak | 1 | -1/+3
Some older NIC hardware isn't capable of doing VLAN push on RX and pop on TX. A workaround has been added in software to support it, but it has a performance penalty since it requires a hairpin + loopback. There's no such limitation with the newer NICs, so no need to pay the price of the w/a. With this change the software w/a is disabled for certain HW versions and steering modes that support it. Signed-off-by: Dima Chumak <dchumak@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5: Introduce software defined steering capabilities | Dima Chumak | 5 | -0/+51
There are two different internal steering modes, abstracted from the rest of the driver. In order to keep the upper layers of the driver agnostic to the differences in capabilities of the steering modes, this patch introduces the mlx5_fs_get_capabilities() API to check whether a certain software-defined capability is supported. It differs from the capabilities exposed by the hardware, as it takes into account the flow steering mode (SMFS/DMFS) currently enabled. This implementation supports only two capability flags: MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX MLX5_FLOW_STEERING_CAP_VLAN_POP_ON_TX They map to the DR_ACTION_STATE_PUSH_VLAN and DR_ACTION_STATE_POP_VLAN actions, implemented in SW steering earlier in commit f5e22be534e0 ("net/mlx5: DR, Split modify VLAN state to separate pop/push states"). This enables using VLAN pop/push without restrictions, e.g. doing VLAN pop on both TX and RX, compared to FW steering, which supports only VLAN pop on RX and push on TX. Other capabilities can be added in the future. Signed-off-by: Dima Chumak <dchumak@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
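The usage shape, sketched with an assumed boolean-returning signature for mlx5_fs_get_capabilities(); only the two flag names come from this patch, the flag values and the wrapper are illustrative:

/* assumed flag encoding; the real values live in the mlx5 fs core headers */
enum {
        MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX = 1 << 0,
        MLX5_FLOW_STEERING_CAP_VLAN_POP_ON_TX  = 1 << 1,
};

struct mlx5_core_dev;
bool mlx5_fs_get_capabilities(struct mlx5_core_dev *dev, unsigned int cap);

static bool vlan_push_on_rx_supported(struct mlx5_core_dev *dev)
{
        /* true only when the currently enabled steering mode (SMFS/DMFS)
         * can offload a VLAN push on the RX path */
        return mlx5_fs_get_capabilities(dev,
                        MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX);
}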
2022-01-27 | net/mlx5: Remove unused TIR modify bitmask enums | Tariq Toukan | 1 | -7/+0
struct mlx5_ifc_modify_tir_bitmask_bits is used for the bitmask of MODIFY_TIR operations. Remove the unused bitmask enums. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: CT, Remove redundant flow args from tc ct calls | Roi Dayan | 4 | -23/+13
The flow arg is not being used so remove it. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: TC, Store mapped tunnel id on flow attr | Roi Dayan | 6 | -23/+10
In preparation for multiple attr instances the tunnel_id should be attr specific and not flow specific. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: Test CT and SAMPLE on flow attr | Roi Dayan | 7 | -32/+82
Currently the mlx5_flow object contains a single mlx5_attr instance. However, multi table actions (e.g. CT) instantiate multiple attr instances. Prepare for multiple attr instances by testing for CT or SAMPLE flag on attr flags instead of flow flag. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Chris Mi <cmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: Refactor eswitch attr flags to just attr flags | Roi Dayan | 10 | -40/+40
The flags are flow attrs and not esw specific attr flags. Refactor to remove the esw prefix and move from eswitch.h to en_tc.h where struct mlx5_flow_attr exists. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: CT, Don't set flow flag CT for ct clear flow | Roi Dayan | 3 | -74/+6
The ct clear action is a normal flow with a modify header setting the registers to 0, so there is no need for any special handling in tc_ct.c. Parsing of the ct clear action still allocates mod acts to set 0 on the registers, and the driver continues to add a normal rule with a modify hdr context. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: TC, Hold sample_attr on stack instead of pointer | Roi Dayan | 5 | -16/+10
In a later commit we are going to instantiate multiple attr instances per flow instead of a single attr. Parsing the TC sample action allocates new memory, but there is no symmetric cleanup in the infrastructure. To avoid the asymmetric alloc/free, hold sample_attr as part of the flow attr instead of allocating it and holding it as a pointer. This avoids a cleanup leak when the sample action is not on the first attr. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: TC, Reject rules with multiple CT actions | Roi Dayan | 2 | -0/+11
The driver doesn't support multiple CT actions. Multiple CT clear actions are OK, as they are redundant, also in combination with other CT actions. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: TC, Refactor mlx5e_tc_add_flow_mod_hdr() to get flow attr | Roi Dayan | 3 | -9/+9
In a later commit we are going to instantiate multiple attr instances per flow instead of a single attr. Make sure mlx5e_tc_add_flow_mod_hdr() uses the correct attr and not flow->attr. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: TC, Pass attr to tc_act can_offload() | Roi Dayan | 19 | -34/+53
In a later commit we are going to instantiate multiple attr instances per flow instead of a single attr. Make sure the parsing uses the correct attr and not flow->attr. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: TC, Split pedit offloads verify from alloc_tc_pedit_action() | Roi Dayan | 1 | -11/+22
Split pedit verify part into a new subfunction for better maintainability. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Maor Dickman <maord@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: TC, Move pedit_headers_action to parse_attr | Roi Dayan | 8 | -34/+19
Move pedit_headers_action from flow parse_state to flow parse_attr. In a follow-up commit we are going to have multiple attrs per flow, and pedit_headers_action is unique per attr. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Maor Dickman <maord@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: Move counter creation call to alloc_flow_attr_counter() | Roi Dayan | 1 | -13/+20
Move shared code to alloc_flow_attr_counter() for reuse by the next patches. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: Pass attr arg for attaching/detaching encaps | Roi Dayan | 3 | -13/+20
In a later commit we will have multiple attr instances per flow, and we would like to pass a specific attr instance when setting encaps. Currently the mlx5_flow object contains a single mlx5_attr instance; however, multi-table actions (e.g. CT) instantiate multiple attr instances. Currently mlx5e_attach/detach_encap() reads the first attr instance from the flow instance. Modify the functions to receive the attr instance as a parameter which is set by the calling function. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net/mlx5e: Move code chunk setting encap dests into its own function | Roi Dayan | 1 | -51/+93
Split the code chunk that sets encap dests out of mlx5e_tc_add_fdb_flow() to make the function smaller, for maintainability and reuse. For symmetry, do the same for mlx5e_tc_del_fdb_flow(). While at it, refactor the cleanup to first check for the encap flag, as is done when setting encap dests. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27 | net: bridge: vlan: fix memory leak in __allowed_ingress | Tim Yi | 1 | -3/+3
When using per-vlan state, if vlan snooping and stats are disabled, an untagged or priority-tagged ingress frame will go on to check the pvid state. If the port state is forwarding and the pvid state is not learning/forwarding, the untagged or priority-tagged frame will be dropped, but the skb memory is not freed. The skb should be freed when __allowed_ingress() returns false. Fixes: a580c76d534c ("net: bridge: vlan: add per-vlan state") Signed-off-by: Tim Yi <tim.yi@pica8.com> Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com> Link: https://lore.kernel.org/r/20220127074953.12632-1-tim.yi@pica8.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
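The fix itself is small: the per-vlan-state drop path gains a kfree_skb(). Roughly, simplified from the description above rather than the literal diff:

/* __allowed_ingress(): pvid state check for untagged / priority-tagged frames */
if (!br_vlan_state_allowed(state, true)) {
        kfree_skb(skb);         /* previously leaked: returned false without freeing */
        return false;
}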
2022-01-27 | igbvf: Remove useless DMA-32 fallback configuration | Christophe JAILLET | 1 | -16/+6
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32 bits case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
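All five of these Intel driver patches (igbvf, igb, igc, ice, iavf) delete the same now-dead fallback; in sketch form, the probe-time change looks like this:

/* before: try 64-bit DMA, fall back to 32-bit on failure */
err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
if (err) {
        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
        if (err) {
                dev_err(&pdev->dev, "No usable DMA configuration\n");
                goto err_dma;
        }
}

/* after: the fallback is dead code, since a 64-bit mask only fails when
 * dev->dma_mask is NULL, in which case the 32-bit mask fails too */
err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
if (err) {
        dev_err(&pdev->dev, "No usable DMA configuration\n");
        goto err_dma;
}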
2022-01-27 | igb: Remove useless DMA-32 fallback configuration | Christophe JAILLET | 1 | -13/+6
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32 bits case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27 | igc: Remove useless DMA-32 fallback configuration | Christophe JAILLET | 1 | -13/+6
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32 bits case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27 | ice: Remove useless DMA-32 fallback configuration | Christophe JAILLET | 1 | -2/+0
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32 bits case will also fail for the same reason. Simplify code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27 | iavf: Remove useless DMA-32 fallback configuration | Christophe JAILLET | 1 | -6/+3
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32 bits case will also fail for the same reason. Simplify code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>