|
This patch enables the sysctl mtu_expires to be configured per net
namespace.
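A minimal sketch of the usual per-netns sysctl conversion pattern (field and table names here are illustrative, not necessarily this patch's exact code): the value moves from a global into struct netns_ipv4, and the per-netns ctl_table entry points its .data at the netns-local field, which the per-netns init hook rewrites for each namespace:

/* Sketch only: illustrative names. */
static struct ctl_table ipv4_route_netns_table[] = {
	{
		.procname	= "mtu_expires",
		.data		= &init_net.ipv4.ip_rt_mtu_expires,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{ }
};

/* In the per-netns init, re-point .data at this netns's field: */
table[0].data = &net->ipv4.ip_rt_mtu_expires;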
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch enables the sysctl min_pmtu to be configured per net
namespace.
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The recent bpf-next merge brought in header changes which uncovered
missing includes in net-next that were not a problem in bpf-next.
Build problems happen only on less-popular arches like hppa,
sparc, alpha etc.
I could repro the build problem with ice but not the mlx5 problem
Abdul was reporting. mlx5 does look like it should include filter.h,
anyway.
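The fix itself is the conventional one-line include in the affected
sources, e.g.:

#include <linux/filter.h>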
Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Fixes: e63a02348958 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next")
Link: https://lore.kernel.org/all/7c03768d-d948-c935-a7ab-b1f963ac7eed@linux.vnet.ibm.com/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds support for scatter-gather DMA. The DMA in the PMAC
splits the packet into several buffers when the MTU on the CPU port
is less than the MTU of the switch. The first buffer starts at an
offset of NET_IP_ALIGN; in subsequent buffers, the DMA ignores the
offset. With this patch, the user can still connect to the device in
such a situation. For normal configurations, the patch has no effect
on performance.
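A rough sketch of such scatter-gather receive assembly (names are
illustrative, not the exact xrx200 code): only the first buffer honors
the NET_IP_ALIGN offset, and fragments are appended until the DMA marks
the last descriptor:

/* Illustrative only: append each RX fragment to the skb under
 * construction; only the first buffer starts at NET_IP_ALIGN. */
int off = first_fragment ? NET_IP_ALIGN : 0;

skb_put_data(skb, buf + off, len - off);
if (last_fragment)
	netif_receive_skb(skb);	/* packet complete */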
Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for external timestamp and periodic signal output.
The TJA1103 has one periodic signal and one external timestamp signal,
which can be multiplexed on any of the 11 GPIO pins.
The periodic signal can only be enabled or disabled. It has no start
time and, if enabled, is generated with a period of one second, in sync
with the LTC seconds counter. The phase can only be changed in
half-second steps.
The external timestamp signal has no interrupt and no valid bit, which
is why the timestamps are handled by polling in .do_aux_work.
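In the PTP core, .do_aux_work returns the delay in jiffies until its
next invocation, so polling for a latched external timestamp can look
roughly like this (the priv layout and register-read helper are
hypothetical):

static long tja1103_do_aux_work(struct ptp_clock_info *ptp)
{
	struct tja1103_priv *priv = container_of(ptp, struct tja1103_priv, caps);
	struct ptp_clock_event event;

	/* Hypothetical helper: read the latched extts time, if any. */
	if (tja1103_read_extts(priv, &event.timestamp)) {
		event.type = PTP_CLOCK_EXTTS;
		event.index = 0;
		ptp_clock_event(priv->ptp_clock, &event);
	}
	return msecs_to_jiffies(500);	/* poll again in 500 ms */
}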
Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Get the originating device from the tuple offload metadata
ingress_ifindex match, and set the flow_source hint to LOCAL for vf/sf
reps, UPLINK for uplink/wire/tunnel devices/bond, or ANY (as before
this patch) for all others.
This allows the lower layer (software steering or firmware) to insert
the tuple rule in only one table (either rx or tx) instead of two (rx
and tx).
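In pseudo-code, the classification amounts to the following (the
device-type helpers are illustrative; the enum values follow the mlx5
flow_source interface):

/* Illustrative mapping from the tuple's ingress device to a hint. */
if (is_uplink_wire_tunnel_or_bond(dev))
	flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_UPLINK;
else if (is_vf_or_sf_rep(dev))
	flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT;
else
	flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_ANY_VPORT;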
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To give drivers the originating device information for optimized
connection tracking offload, fill in the act ct extension with the
ifindex from the skb.
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Drivers offloading ct tuples can use the information of which devices
received the packets that created the offloaded connections to offload
them more efficiently, only to the relevant device.
Add a new act_ct nf conntrack extension, which is used to store the
skb's device before offloading the connection, and then fill in the
tuple iifindex so drivers can get the device via a metadata dissector
match.
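Conceptually, the extension is filled from the skb before offload,
along these lines (the helper and field names are a sketch of this
series' interface and may differ):

/* Sketch: remember which device each direction's packets came in on. */
struct nf_conn_act_ct_ext *ext = nf_conn_act_ct_ext_add(ct);

if (ext)
	ext->ifindex[CTINFO2DIR(ctinfo)] = skb->skb_iif;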
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
On systems with large numbers of MDIO buses/muxes, the message
indicating that a given MDIO bus has been successfully probed is
repeated for as many buses as we have, which can eat up substantial
boot time for no reason. Demote it to a debug print.
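The change itself is the usual one-line demotion, roughly:

-	dev_info(&bus->dev, "probed\n");
+	dev_dbg(&bus->dev, "probed\n");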
Reported-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220103194024.2620-1-f.fainelli@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Use dma_set_mask_and_coherent() instead of unrolling it into
dma_set_mask()+dma_set_coherent_mask().
Moreover, as stated in [1], dma_set_mask() with a 64-bit mask will never
fail if dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
Given that, 'high_dma' can only be 1 after a successful
dma_set_mask_and_coherent().
Simplify the code and remove some dead code accordingly, including a now
useless parameter of vxge_device_register().
[1]: https://lkml.org/lkml/2021/6/7/398
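The transformation is mechanical; schematically (illustrative, not the
exact driver diff):

-	if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
-	    !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)))
-		high_dma = 1;
-	else if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)))
-		goto err_out;
+	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
+		goto err_out;
+	high_dma = 1;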
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use dma_set_mask_and_coherent() instead of unrolling it into
dma_set_mask()+dma_set_coherent_mask().
Moreover, as stated in [1], dma_set_mask() with a 64-bit mask will never
fail if dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
Given that, 'dma_flag' can only be 'true' after a successful
dma_set_mask_and_coherent().
Simplify the code and remove some dead code accordingly, including the
now useless 'high_dma_flag' field in 'struct s2io_nic'.
[1]: https://lkml.org/lkml/2021/6/7/398
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sorry for being rude, but new vendors/drivers are supposed to be
disabled by default; otherwise we will have to manually keep track of
all the vendors we are not interested in building.
Fixes: 2f207cbf0dd4 ("net: vertexcom: Add MSE102x SPI support")
CC: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
pcs-lynx.c used both lynx_pcs and lynx as variable names within the same
file. This standardizes all internal variables to just "lynx".
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A simple variable update from "pcs" to "mdio_device" for the mdio device
will make things a little cleaner.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A simple variable update from "pcs" to "mdio_device" for the mdio device
will make things a little cleaner.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Simple rename of a variable to make things more logical.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove references to lynx_pcs structures so drivers like the Felix DSA
can reference alternate PCS drivers.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In [1], Christoph Hellwig proposed removing the wrappers in
include/linux/pci-dma-compat.h.
Some reasons why this API should be removed were given by Julia
Lawall in [2].
A coccinelle script was used to perform the needed transformation.
Only the relevant parts are given below.
@@
expression e1, e2;
@@
- pci_dma_mapping_error(e1, e2)
+ dma_mapping_error(&e1->dev, e2)
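For reference, such a semantic patch is typically applied with
something like (the .cocci file name here is hypothetical):

spatch --sp-file pci-dma-compat.cocci --in-place drivers/net/ethernet/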
[1]: https://lore.kernel.org/kernel-janitors/20200421081257.GA131897@infradead.org/
[2]: https://lore.kernel.org/kernel-janitors/alpine.DEB.2.22.394.2007120902170.2424@hadrien/
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use dma_set_mask_and_coherent() instead of unrolling it into
dma_set_mask()+dma_set_coherent_mask().
Moreover, as stated in [1], dma_set_mask() with a 64-bit mask will never
fail if dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
Simplify the code and remove some dead code accordingly.
Now that qed_set_coherency_mask() is mostly a single call to
dma_set_mask_and_coherent(), fold it into its only caller.
[1]: https://lkml.org/lkml/2021/6/7/398
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use dma_set_mask_and_coherent() instead of unrolling it into
dma_set_mask()+dma_set_coherent_mask().
Moreover, as stated in [1], dma_set_mask() with a 64-bit mask will never
fail if dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
Given that, 'pci_using_dac' can only be 1 after a successful
dma_set_mask_and_coherent().
Simplify the code and remove some dead code accordingly.
[1]: https://lkml.org/lkml/2021/6/7/398
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use dma_set_mask_and_coherent() instead of unrolling it into
dma_set_mask()+dma_set_coherent_mask().
Moreover, as stated in [1], dma_set_mask() with a 64-bit mask will never
fail if dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
Given that, 'pci_using_dac' can only be 1 after a successful
dma_set_mask_and_coherent().
Simplify the code and remove some dead code accordingly.
[1]: https://lkml.org/lkml/2021/6/7/398
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add comments for both smc_link_sendable() and smc_link_usable()
to help better distinguish and use them.
No function changes.
Signed-off-by: Dust Li <dust.li@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use dma_set_mask_and_coherent() instead of unrolling it into
dma_set_mask()+dma_set_coherent_mask().
Moreover, as stated in [1], dma_set_mask_and_coherent() with a 64-bit
mask will never fail if dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
Given that, 'pci_using_dac' can only be 1 after a successful
dma_set_mask_and_coherent().
Simplify the code and remove some dead code accordingly.
[1]: https://lkml.org/lkml/2021/6/7/398
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use dma_set_mask_and_coherent() instead of unrolling it into
dma_set_mask()+dma_set_coherent_mask().
This simplifies the code and removes some dead code
(dma_set_coherent_mask() cannot fail after a successful dma_set_mask()).
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Removed spaces and added a tab to fix an error reported by checkpatch.
Signed-off-by: Hamish MacDonald <elusivenode@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
v3:
- Report 'backlog' (bytes) instead of 'qlen' (number of packets)
v2:
- Fix sparse warning (use rcu_dereference)
This patch adds support for the queue depth in IOAM trace data fields.
The draft [1] says the following:
The "queue depth" field is a 4-octet unsigned integer field. This
field indicates the current length of the egress interface queue of
the interface from where the packet is forwarded out. The queue
depth is expressed as the current amount of memory buffers used by
the queue (a packet could consume one or more memory buffers,
depending on its size).
An existing function (i.e., qdisc_qstats_qlen_backlog) is used to
retrieve the current queue length without reinventing the wheel.
Note: this was tested, and the queue length increases when an
artificial delay is added on the egress with tc.
[1] https://datatracker.ietf.org/doc/html/draft-ietf-ippm-ioam-data#section-5.4.2.7
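A condensed sketch of how the field can be filled with that helper (the
surrounding qdisc lookup is illustrative):

/* Illustrative: fill the 4-octet queue depth from the egress qdisc. */
struct netdev_queue *queue = skb_get_tx_queue(skb_dst(skb)->dev, skb);
struct Qdisc *qdisc = rcu_dereference(queue->qdisc);	/* under rcu_read_lock() */
__u32 qlen, backlog;

qdisc_qstats_qlen_backlog(qdisc, &qlen, &backlog);
*(__be32 *)data = cpu_to_be32(backlog);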
Signed-off-by: Justin Iurman <justin.iurman@uliege.be>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The pointer link is being re-assigned the same value that it was
initialized with in the previous declaration statement. The
re-assignment is redundant and can be removed.
Fixes: 387707fdf486 ("net/smc: convert static link ID to dynamic references")
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Reviewed-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This implements a TCP ULP for SMC, which helps applications replace TCP
with the SMC protocol in place, and we use it to implement transparent
replacement.
It replaces the original TCP socket with SMC, reusing TCP as the
clcsock when setsockopt() is called with the TCP_ULP option, without
any overhead.
To replace TCP sockets with SMC, there are two approaches:
- use the setsockopt() syscall with the TCP_ULP option; on error, it
falls back to TCP.
- use a BPF prog of type BPF_CGROUP_INET_SOCK_CREATE or others to
replace transparently. BPF hooks some points in socket create, bind
and others; users can inject their BPF logic without modifying their
applications and choose which connections should be replaced with SMC
by calling setsockopt() in the BPF prog, based on rules such as TCP
tuples, PID, cgroup, etc.
BPF doesn't support calling setsockopt() with TCP_ULP yet; I will send
those patches after this is accepted.
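From the application's point of view, the in-place replacement is a
single call on an ordinary TCP socket, using the standard TCP_ULP
interface with the "smc" name this patch registers:

/* Opt a TCP socket into SMC; on failure it keeps plain TCP semantics. */
int fd = socket(AF_INET, SOCK_STREAM, 0);

if (setsockopt(fd, SOL_TCP, TCP_ULP, "smc", sizeof("smc")) < 0)
	perror("TCP_ULP smc not available, staying on TCP");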
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This prints the net namespace ID, which helps us distinguish different
net namespaces when using tracepoints.
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This adds the net namespace ID to the kernel log; net_cookie is unique
across the whole system. It is useful in container environments.
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This adds the net namespace ID to the linkgroup diag, which helps us
distinguish different namespaces; net_cookie is unique across the whole
system.
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently, the rdma device supports exclusive net namespace isolation,
but the linkgroup doesn't know about or support the ibdev net
namespace. Applications in containers don't want to share the nics if
rdma exclusive mode is enabled; every net namespace should have its own
linkgroups.
This patch introduces a new field, net, in the linkgroup, standing for
the ibdev net namespace of the linkgroup. The net of a linkgroup is
initialized with the net namespace of the link's ibdev. The net of the
linkgroup is compared with that of the sock or ibdev before choosing
it; if none matches, a new linkgroup is created in the current net
namespace. If rdma net namespace exclusive mode is not enabled, it
behaves as before.
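The reuse decision then reduces to a netns comparison along these lines
(lgr->net is the new field; the surrounding loop and ibdev_net variable
are illustrative):

/* Illustrative: only reuse a linkgroup whose netns matches the ibdev's. */
if (!net_eq(lgr->net, ibdev_net))
	continue;	/* no match: fall through and create a new lgr */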
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently most cmdq APIs are unified in hclge_comm_cmd.c, and newly
developed cmdq APIs should also be placed there, so there is no need to
keep hclge_cmd.c and hclgevf_cmd.c.
This patch moves hclge(vf)_cmd_send to hclge(vf)_main.c and deletes the
source files and makefile scripts.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch uses the common cmdq init and uninit APIs to replace the old
APIs in the VF cmdq module init and uninit paths. The old VF init and
uninit APIs are then deleted.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch uses the common cmdq init and uninit APIs to replace the old
APIs in the PF cmdq module init and uninit paths. The old PF init and
uninit APIs are then deleted.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The PF and VF cmdq init and uninit APIs are also almost the same except
for the suffixes of the API names.
This patch creates the common cmdq init and uninit APIs needed by the
PF and VF cmdq modules. The next patch will use the new unified APIs to
replace the init and uninit APIs in the PF module.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch uses the common cmdq resource allocate/free/query APIs to
replace the old APIs in the VF cmdq module and deletes the old cmdq
resource APIs.
The hclgevf_cmd_setup_basic_desc name is still kept as a seam API to
avoid too many meaningless replacements.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch uses the common cmdq resource allocate/free/query APIs to
replace the old APIs in the PF cmdq module and deletes the old cmdq
resource APIs.
The hclge_cmd_setup_basic_desc name is still kept as a seam API to
avoid too many meaningless replacements.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The PF and VF cmdq module resource allocate/free/query APIs are almost
the same except for the suffixes of the API names. These duplicated
implementations double the development and bugfix work.
This patch creates common cmdq resource allocate/free/query APIs called
by the PF and VF cmdq init/uninit APIs. The next patch will use the new
unified APIs to replace the init/uninit APIs.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch first uses the new hardware description struct hclge_comm_hw
as a child member of hclgevf_hw and deletes the old hardware
description members. All hclgevf_hw variables used in the VF module are
updated according to the new hclgevf_hw.
Second, hclgevf_cmd_send is refactored to use the hclge_comm_cmd_send
APIs. The old functions called by hclgevf_cmd_send are all deleted;
hclgevf_cmd_send itself is kept to avoid too many meaningless
modifications.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch first uses the new hardware description struct hclge_comm_hw
as a child member of hclge_hw and deletes the original child members of
hclge_hw. All hclge_hw variables used in the PF module are updated
according to the new hclge_hw.
Second, hclge_cmd_send is refactored to use the hclge_comm_cmd_send
APIs. The old functions called by hclge_cmd_send are deleted, and
hclge_cmd_send is kept to avoid too many meaningless modifications.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch creates a new set of unified hclge_comm_cmd_send APIs for
the PF and VF cmdq modules. The subfunctions called by
hclge_comm_cmd_send are also created, including the cmdq result check,
cmdq return code conversion and ring space operation APIs.
These new common cmdq APIs will be used to replace the old PF and VF
cmdq APIs in the next patches.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch uses the new common struct hclge_desc to replace struct
hclgevf_desc in the VF cmdq module and then deletes the old struct
hclgevf_desc.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently the PF and VF cmdq APIs use struct hclge(vf)_hw to describe
the cmdq hardware information needed by hclge(vf)_cmd_send. There are
small differences between their child structs hclge_cmq_ring and
hclgevf_cmq_ring, and it is redundant to use two sets of structures to
support the same functions.
So this patch creates a new set of common cmdq hardware description
structures (hclge_comm_hw) to unify the PF and VF cmdq functions. The
struct hclge_desc is still kept to avoid too many meaningless
replacements.
These new structures will be used to unify the hclge(vf)_hw structures
in the PF and VF cmdq APIs in the next patches.
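The shape of the new common description is roughly as follows (a
hypothetical sketch; the real field layout may differ):

/* Hypothetical sketch of the unified cmdq hardware description. */
struct hclge_comm_cmq {
	struct hclge_comm_cmq_ring csq;	/* command send queue */
	struct hclge_comm_cmq_ring crq;	/* command receive queue */
};

struct hclge_comm_hw {
	void __iomem *io_base;
	struct hclge_comm_cmq cmq;
};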
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We plan to refactor the PF and VF cmdq modules. A new folder,
hns3_common, will be created to store the new common APIs used by the
PF and VF cmdq modules, so the PF and VF compilation processes will
both depend on hns3_common. This may cause parallel build problems if
we add a new makefile building unit.
So this patch combines the PF and VF makefile scripts into the
top-level makefile to support the new hns3_common folder, which will be
created in the next patch.
Signed-off-by: Jie Wang <wangjie125@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Set SMFS (SW-managed flow steering) as the default steering mode
instead of DMFS (device-managed flow steering).
In SMFS, the driver writes the STEs (Steering Table Entries) directly
to the device's ICM, which allows for a higher rule insertion rate
than going through the FW command interface, as is done in DMFS.
The SMFS/DMFS steering mode can be configured through the devlink param
'flow_steering_mode'. The possible values are 'smfs' or 'dmfs'.
The desired 'flow_steering_mode' param value should be set before
enabling switchdev mode.
Example:
# devlink dev param set pci/0000:05:00.0 name flow_steering_mode smfs
# devlink dev eswitch set pci/0000:05:00.0 mode switchdev
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
|
|
When modifying the TTL, the packet's csum has to be recalculated.
Due to a HW issue in ConnectX-5, csum recalculation for modify TTL
is supported through a work-around that must be specifically enabled
by configuration.
If the work-around isn't enabled, ignore the modify TTL action
rather than adding an unsupported action.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
|
|
Every matcher has RX and TX paths. When a new matcher is created, its RX
and TX start/end anchors are connected to the respective RX and TX
anchors of the previous and next matchers.
This creates a potential performance issue: when a certain rule is added
to a matcher, in many cases it is an RX-only or TX-only rule, which may
create a long chain of RX/TX-only paths without any actual rules.
This patch aims to handle this issue.
RX and TX matchers are now handled separately: the matcher connections
in the matcher chain are split into two separate lists, RX only and TX
only.
When a new matcher is created, it is initially created 'detached': its
RX/TX members are not inserted into the table's matcher list.
When an actual rule is added, only its appropriate RX or TX nic matcher
is added to the table's nic matcher list and inserted into its place in
the chain of matchers.
I.e., if the rule being added is an RX-only rule, only the RX part of
the matcher is connected to the chain, while the TX part of the matcher
remains detached and doesn't prolong the TX chain of the matchers.
The same goes for rule deletion: when the last RX/TX rule of a nic
matcher is destroyed, the nic matcher is removed from its list.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
|
|
Match on geneve_tlv_option_0_exist field on devices that support STEv1.
Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
|
|
Tunnel headers are generic encapsulation headers that apply to all
tunneling protocols identified by the device's native parser or by the
programmable parser. This support enables raw matching on tunnel
headers 0 and 1.
Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
|